Nov 23 06:45:46 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 23 06:45:46 crc restorecon[4738]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:46 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 
06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:45:47 crc 
restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 
06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:45:47 crc restorecon[4738]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 23 06:45:48 crc kubenswrapper[4988]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 06:45:48 crc kubenswrapper[4988]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 23 06:45:48 crc kubenswrapper[4988]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 06:45:48 crc kubenswrapper[4988]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 23 06:45:48 crc kubenswrapper[4988]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 23 06:45:48 crc kubenswrapper[4988]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.209362 4988 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.220855 4988 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.220913 4988 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.220927 4988 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.220942 4988 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.220956 4988 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.220966 4988 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.220980 4988 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.220994 4988 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221004 4988 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221017 4988 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221027 4988 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221037 4988 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221047 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221056 4988 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221064 4988 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221072 4988 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221079 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221087 4988 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221104 4988 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221112 4988 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221119 4988 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221127 4988 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221137 4988 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221149 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221159 4988 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221167 4988 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221175 4988 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221184 4988 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221258 4988 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221268 4988 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221277 4988 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221287 4988 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221296 4988 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221303 4988 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221312 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221320 4988 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221327 4988 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221336 4988 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221349 4988 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221372 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221383 4988 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221393 4988 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221403 4988 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221413 4988 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221423 4988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221433 4988 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221450 4988 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221458 4988 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221467 4988 feature_gate.go:330] unrecognized feature gate: Example
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221476 4988 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221495 4988 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221510 4988 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221521 4988 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221529 4988 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221537 4988 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221549 4988 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221560 4988 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221569 4988 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221578 4988 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221587 4988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221597 4988 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221606 4988 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221616 4988 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221624 4988 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221632 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221639 4988 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221647 4988 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221655 4988 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221662 4988 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221670 4988 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.221679 4988 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
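[Editor's note] The wall of feature_gate.go:330 warnings above is the stock kubelet complaining about OpenShift-level feature gates (AdminNetworkPolicy, GatewayAPI, and so on) that the upstream gate registry does not know; only the recognized entries survive into the resolved map logged later at feature_gate.go:386. When triaging a log like this it helps to collapse the noise first. A minimal sketch, assuming the raw journal text arrives on stdin:

    import re
    import sys
    from collections import Counter

    # Summarize the repeated "unrecognized feature gate" warnings so the
    # rest of the startup log is readable: one line per gate with a count.
    pattern = re.compile(r"unrecognized feature gate: ([A-Za-z0-9]+)")
    counts = Counter(pattern.findall(sys.stdin.read()))
    for gate, n in counts.most_common():
        print(f"{n:4d}  {gate}")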
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.221904 4988 flags.go:64] FLAG: --address="0.0.0.0"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.221937 4988 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.221962 4988 flags.go:64] FLAG: --anonymous-auth="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.221977 4988 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.221992 4988 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222004 4988 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222021 4988 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222036 4988 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222047 4988 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222059 4988 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222071 4988 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222088 4988 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222100 4988 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222111 4988 flags.go:64] FLAG: --cgroup-root=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222123 4988 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222133 4988 flags.go:64] FLAG: --client-ca-file=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222143 4988 flags.go:64] FLAG: --cloud-config=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222152 4988 flags.go:64] FLAG: --cloud-provider=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222161 4988 flags.go:64] FLAG: --cluster-dns="[]"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222171 4988 flags.go:64] FLAG: --cluster-domain=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222180 4988 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222223 4988 flags.go:64] FLAG: --config-dir=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222233 4988 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222243 4988 flags.go:64] FLAG: --container-log-max-files="5"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222256 4988 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222267 4988 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222279 4988 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222291 4988 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222303 4988 flags.go:64] FLAG: --contention-profiling="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222315 4988 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222326 4988 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222339 4988 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222352 4988 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222366 4988 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222377 4988 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222386 4988 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222395 4988 flags.go:64] FLAG: --enable-load-reader="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222404 4988 flags.go:64] FLAG: --enable-server="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222413 4988 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222427 4988 flags.go:64] FLAG: --event-burst="100"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222436 4988 flags.go:64] FLAG: --event-qps="50"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222445 4988 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222454 4988 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222464 4988 flags.go:64] FLAG: --eviction-hard=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222475 4988 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222485 4988 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222493 4988 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222505 4988 flags.go:64] FLAG: --eviction-soft=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222514 4988 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222523 4988 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222532 4988 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222541 4988 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222550 4988 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222560 4988 flags.go:64] FLAG: --fail-swap-on="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222568 4988 flags.go:64] FLAG: --feature-gates=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222579 4988 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222589 4988 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222598 4988 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222607 4988 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222616 4988 flags.go:64] FLAG: --healthz-port="10248"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222627 4988 flags.go:64] FLAG: --help="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222638 4988 flags.go:64] FLAG: --hostname-override=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222648 4988 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222661 4988 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222673 4988 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222684 4988 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222695 4988 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222707 4988 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222718 4988 flags.go:64] FLAG: --image-service-endpoint=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222726 4988 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222735 4988 flags.go:64] FLAG: --kube-api-burst="100"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222744 4988 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222754 4988 flags.go:64] FLAG: --kube-api-qps="50"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222763 4988 flags.go:64] FLAG: --kube-reserved=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222772 4988 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222781 4988 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222790 4988 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222800 4988 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222810 4988 flags.go:64] FLAG: --lock-file=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222819 4988 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222828 4988 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222837 4988 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222864 4988 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222875 4988 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222884 4988 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222894 4988 flags.go:64] FLAG: --logging-format="text"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222903 4988 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222913 4988 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222921 4988 flags.go:64] FLAG: --manifest-url=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222931 4988 flags.go:64] FLAG: --manifest-url-header=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222943 4988 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222952 4988 flags.go:64] FLAG: --max-open-files="1000000"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222963 4988 flags.go:64] FLAG: --max-pods="110"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222972 4988 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222982 4988 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.222991 4988 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223000 4988 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223010 4988 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223019 4988 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223028 4988 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223055 4988 flags.go:64] FLAG: --node-status-max-images="50"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223064 4988 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223073 4988 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223082 4988 flags.go:64] FLAG: --pod-cidr=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223091 4988 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223105 4988 flags.go:64] FLAG: --pod-manifest-path=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223113 4988 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223123 4988 flags.go:64] FLAG: --pods-per-core="0"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223132 4988 flags.go:64] FLAG: --port="10250"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223141 4988 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223150 4988 flags.go:64] FLAG: --provider-id=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223159 4988 flags.go:64] FLAG: --qos-reserved=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223168 4988 flags.go:64] FLAG: --read-only-port="10255"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223177 4988 flags.go:64] FLAG: --register-node="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223185 4988 flags.go:64] FLAG: --register-schedulable="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223223 4988 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223239 4988 flags.go:64] FLAG: --registry-burst="10"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223249 4988 flags.go:64] FLAG: --registry-qps="5"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223258 4988 flags.go:64] FLAG: --reserved-cpus=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223268 4988 flags.go:64] FLAG: --reserved-memory=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223280 4988 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223290 4988 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223299 4988 flags.go:64] FLAG: --rotate-certificates="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223308 4988 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223316 4988 flags.go:64] FLAG: --runonce="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223326 4988 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223335 4988 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223346 4988 flags.go:64] FLAG: --seccomp-default="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223364 4988 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223376 4988 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223388 4988 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223400 4988 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223412 4988 flags.go:64] FLAG: --storage-driver-password="root"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223424 4988 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223435 4988 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223446 4988 flags.go:64] FLAG: --storage-driver-user="root"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223457 4988 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223469 4988 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223480 4988 flags.go:64] FLAG: --system-cgroups=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223491 4988 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223512 4988 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223524 4988 flags.go:64] FLAG: --tls-cert-file=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223534 4988 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223549 4988 flags.go:64] FLAG: --tls-min-version=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223560 4988 flags.go:64] FLAG: --tls-private-key-file=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223571 4988 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223582 4988 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223594 4988 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223607 4988 flags.go:64] FLAG: --v="2"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223623 4988 flags.go:64] FLAG: --version="false"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223638 4988 flags.go:64] FLAG: --vmodule=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223652 4988 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.223661 4988 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.223901 4988 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.223913 4988 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.223935 4988 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.223945 4988 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.223956 4988 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.223966 4988 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.223974 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.223987 4988 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.223995 4988 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224004 4988 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224012 4988 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224021 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224029 4988 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224037 4988 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224046 4988 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224054 4988 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224062 4988 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224072 4988 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224081 4988 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224092 4988 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224102 4988 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224111 4988 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224121 4988 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224130 4988 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224141 4988 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224151 4988 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224160 4988 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224175 4988 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224185 4988 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224231 4988 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224243 4988 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224252 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224261 4988 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224271 4988 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224281 4988 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224291 4988 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224301 4988 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224310 4988 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224321 4988 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224336 4988 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
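[Editor's note] The flags.go:64 block above dumps every flag value the kubelet parsed, which is handy for diffing a node's effective command line against what its machine config is supposed to render (the gate warnings resume below). A small sketch, again assuming the raw journal text arrives on stdin:

    import re
    import sys

    # Collect the kubelet's FLAG: --name="value" dump (flags.go:64, emitted
    # at -v=2) into a dict keyed by flag name.
    flag_re = re.compile(r'FLAG: (--[A-Za-z0-9.-]+)="([^"]*)"')
    flags = dict(flag_re.findall(sys.stdin.read()))
    print(flags.get("--node-ip"))               # 192.168.126.11 in the dump above
    print(flags.get("--register-with-taints"))  # node-role.kubernetes.io/master=:NoSchedule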
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224346 4988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224355 4988 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224363 4988 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224370 4988 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224378 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224387 4988 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224397 4988 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224407 4988 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224417 4988 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224427 4988 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224436 4988 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224447 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224456 4988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224466 4988 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224475 4988 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224484 4988 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224495 4988 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224504 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224514 4988 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224524 4988 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224533 4988 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224540 4988 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224548 4988 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224555 4988 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224563 4988 feature_gate.go:330] unrecognized feature gate: Example
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224571 4988 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224579 4988 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224586 4988 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224596 4988 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224605 4988 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.224614 4988 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.224642 4988 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.239962 4988 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.240020 4988 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240159 4988 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240174 4988 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240183 4988 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240221 4988 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240230 4988 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240238 4988 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240246 4988 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240255 4988 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240264 4988 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240273 4988 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240281 4988 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240289 4988 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240298 4988 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240307 4988 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240315 4988 feature_gate.go:330] unrecognized feature gate: Example
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240324 4988 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240333 4988 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240342 4988 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240350 4988 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240358 4988 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240366 4988 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240375 4988 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240385 4988 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240396 4988 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240404 4988 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240413 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240420 4988 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240428 4988 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240435 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240445 4988 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240453 4988 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240461 4988 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240469 4988 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240476 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240484 4988 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240492 4988 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240499 4988 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240507 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240514 4988 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240522 4988 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240530 4988 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240540 4988 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240548 4988 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240556 4988 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240563 4988 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240572 4988 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240580 4988 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240588 4988 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240595 4988 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240603 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240611 4988 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240618 4988 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240626 4988 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240633 4988 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240641 4988 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240649 4988 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240656 4988 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240664 4988 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240672 4988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240680 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240688 4988 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240696 4988 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240703 4988 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240711 4988 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240718 4988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240727 4988 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240738 4988 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240748 4988 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240759 4988 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240768 4988 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.240779 4988 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.240794 4988 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241014 4988 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241026 4988 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241035 4988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241044 4988 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241051 4988 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241062 4988 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
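[Editor's note] Each feature_gate.go:386 line above carries the resolved gate set as a Go map literal, which is awkward to eyeball (more gate warnings follow before the final map). A sketch that turns one of those lines into a Python dict; the sample string is an abbreviated copy of the map logged above:

    import re

    # Parse the Go map literal from a feature_gate.go:386 line into a dict.
    line = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "NodeSwap:false ValidatingAdmissionPolicy:true]}")  # abbreviated from the log
    body = re.search(r"map\[(.*)\]", line).group(1)
    gates = {name: value == "true"
             for name, value in (item.split(":", 1) for item in body.split())}
    print(sorted(name for name, enabled in gates.items() if enabled))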
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241074 4988 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241083 4988 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241093 4988 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241102 4988 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241110 4988 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241118 4988 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241126 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241134 4988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241142 4988 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241149 4988 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241157 4988 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241164 4988 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241172 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241180 4988 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241187 4988 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241253 4988 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241264 4988 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241276 4988 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241285 4988 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241294 4988 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241302 4988 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241310 4988 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241318 4988 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241337 4988 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241346 4988 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241354 4988 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241362 4988 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241369 4988 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241380 4988 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241390 4988 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241397 4988 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241405 4988 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241413 4988 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241420 4988 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241429 4988 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241437 4988 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241444 4988 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241452 4988 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241459 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241467 4988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241475 4988 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241483 4988 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241491 4988 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241498 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241506 4988 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241513 4988 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241522 4988 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241529 4988 feature_gate.go:330] unrecognized feature gate: Example
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241537 4988 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241544 4988 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241552 4988 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241560 4988 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241567 4988 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241575 4988 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241582 4988 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241590 4988 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241597 4988 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241605 4988 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241613 4988 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241623 4988 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241631 4988 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241639 4988 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241647 4988 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241657 4988 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.241667 4988 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.241681 4988 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.241949 4988 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.248866 4988 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.249017 4988 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.251187 4988 server.go:997] "Starting client certificate rotation"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.251277 4988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.251524 4988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 16:43:43.397941928 +0000 UTC
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.251691 4988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.284054 4988 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.288632 4988 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.288856 4988 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.315517 4988 log.go:25] "Validated CRI v1 runtime API"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.361149 4988 log.go:25] "Validated CRI v1 image API"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.364178 4988 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.373003 4988 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-23-06-40-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.373048 4988 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.397572 4988 manager.go:217] Machine: {Timestamp:2025-11-23 06:45:48.394058181 +0000 UTC m=+0.702570974 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e9018f38-2998-476c-ae0b-0f99d72a3f69 BootID:5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d0:9a:15 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d0:9a:15 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:06:c4:47 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:33:0d:52 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:9e:4b:b2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ed:74:f5 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:af:03:88 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:86:35:b0:4d:f5:36 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:46:f0:9c:f7:57:4e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.397855 4988 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.398126 4988 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.401358 4988 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.401745 4988 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.401826 4988 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.402228 4988 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.402250 4988 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.403070 4988 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.403111 4988 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.403491 4988 state_mem.go:36] "Initialized new in-memory state store" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.403645 4988 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.407548 4988 kubelet.go:418] "Attempting to sync node with API server" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.407594 4988 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.407650 4988 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.407687 4988 kubelet.go:324] "Adding apiserver pod source" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.407714 4988 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.413497 4988 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.415371 4988 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.415918 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.415967 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.416044 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.416083 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.417463 4988 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419654 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419682 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419691 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419701 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419716 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419725 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419734 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419773 4988 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419783 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419793 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419818 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.419826 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.421204 4988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.421823 4988 server.go:1280] "Started kubelet" Nov 23 06:45:48 crc systemd[1]: Started Kubernetes Kubelet. Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.425549 4988 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.425373 4988 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.425376 4988 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.427737 4988 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.429412 4988 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.429501 4988 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.429565 4988 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:35:31.240075921 +0000 UTC Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.429728 4988 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 814h49m42.810352422s for next certificate rotation Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.429780 4988 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.429800 4988 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.429900 4988 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.430038 4988 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.435694 4988 factory.go:55] Registering systemd factory Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.435739 4988 factory.go:221] Registration of the systemd container factory successfully Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.435927 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: 
connection refused Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.436047 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.436581 4988 factory.go:153] Registering CRI-O factory Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.436781 4988 factory.go:221] Registration of the crio container factory successfully Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.436940 4988 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.437049 4988 factory.go:103] Registering Raw factory Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.437131 4988 manager.go:1196] Started watching for new ooms in manager Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.437596 4988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="200ms" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.440435 4988 manager.go:319] Starting recovery of all containers Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.440929 4988 server.go:460] "Adding debug handlers to kubelet server" Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.441275 4988 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.176:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a8fd4d9f92269 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-23 06:45:48.421792361 +0000 UTC m=+0.730305134,LastTimestamp:2025-11-23 06:45:48.421792361 +0000 UTC m=+0.730305134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452414 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452505 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452530 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452550 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452570 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452589 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452609 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452665 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452687 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452708 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452728 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452747 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452794 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452816 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452834 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452856 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452876 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452904 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452923 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452944 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452962 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.452984 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.453002 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.453022 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.453042 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.453069 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.453103 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.453186 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.454136 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.454317 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.454488 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.454612 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.454726 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.454855 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.454976 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.455101 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.455253 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.455395 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.455519 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.455645 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.455793 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.455935 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.456084 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.456259 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.456411 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.456579 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.456736 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.456881 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.457020 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.457148 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.457319 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.457480 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.457640 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.457779 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.457912 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458458 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458483 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458544 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458632 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458648 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458663 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458715 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458748 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458804 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458820 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458850 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458885 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458906 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458921 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458955 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.458972 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.459834 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.459985 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.460116 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.460285 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.460399 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.460496 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.460612 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.460719 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.460840 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.460947 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.461049 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.461144 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.461302 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.461418 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.461533 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.461639 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.461744 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.461864 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.461981 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462102 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462235 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462333 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462432 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462523 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462639 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462743 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462830 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462907 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.462991 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463071 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463146 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463258 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463346 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463456 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463556 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463653 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463744 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463848 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.463967 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.464086 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.464178 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.464316 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.464418 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.464528 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.464656 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.464754 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.464889 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.464986 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.465097 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.465243 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.465382 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.465522 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.465642 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.465755 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.465885 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.465992 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466104 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466248 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466365 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466417 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466444 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466464 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466489 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466512 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466531 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466551 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466574 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466603 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466622 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466643 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466663 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466684 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466718 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466743 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466763 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466784 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466804 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466861 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466882 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466904 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466924 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466943 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.466964 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.468251 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.468304 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.468349 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.468371 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.468395 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.468443 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.468464 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.469660 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.469824 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.469944 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.470068 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.470210 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.470355 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.470464 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.470576 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.470697 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.470805 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.470923 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.471054 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.471189 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.471347 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.471471 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.471606 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.471726 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.471847 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.471966 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.472081 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.471098 4988 manager.go:324] Recovery completed Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.472228 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.472952 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473673 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473728 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473753 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473774 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473794 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473816 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473837 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473857 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473876 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473895 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473914 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.473937 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.476820 4988 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.476876 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.476899 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.476921 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.476955 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 23 06:45:48 crc 
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.476975 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.476995 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.477013 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.477033 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.477055 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.477074 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.477096 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.477117 4988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.477134 4988 reconstruct.go:97] "Volume reconstruction finished"
Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.477147 4988 reconciler.go:26] "Reconciler: start to sync state"
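The reconstruct.go:130 entries above show the kubelet, just after restart, rebuilding its actual state of the world from the volume directories left under /var/lib/kubelet/pods: each volume found on disk is re-added as "uncertain" until the reconciler verifies it, which is why the block closes with "Volume reconstruction finished" and "Reconciler: start to sync state". A minimal Go sketch (not part of the kubelet) for tallying those entries by volume plugin; the input filename kubelet.log is an assumption for illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Hypothetical capture of the journal shown above.
	f, err := os.Open("kubelet.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// reconstruct.go:130 entries carry volumeName="kubernetes.io/<plugin>/...".
	re := regexp.MustCompile(`reconstruct\.go:130\].*volumeName="kubernetes\.io/([^/"]+)/`)
	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++ // e.g. configmap, secret, projected, empty-dir, csi
		}
	}
	for plugin, n := range counts {
		fmt.Printf("%-10s %d\n", plugin, n)
	}
}

On this section of the log it would report only configmap, secret, projected, empty-dir and csi volumes, matching the plugin types visible above.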
protocol="IPv6" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.494801 4988 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.494843 4988 kubelet.go:2335] "Starting kubelet main sync loop" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.494850 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.494904 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.494920 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.494916 4988 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.495767 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.495864 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.496125 4988 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.496148 4988 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.496174 4988 state_mem.go:36] "Initialized new in-memory state store" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.520899 4988 policy_none.go:49] "None policy: Start" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.523898 4988 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.523948 4988 state_mem.go:35] "Initializing new in-memory state store" Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.531092 4988 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.590466 4988 manager.go:334] "Starting Device Plugin manager" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.590794 4988 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.590823 4988 server.go:79] "Starting device plugin registration server" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.591805 4988 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.591836 4988 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.592320 4988 plugin_watcher.go:51] "Plugin Watcher Start" 
path="/var/lib/kubelet/plugins_registry" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.592483 4988 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.592532 4988 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.594970 4988 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.595044 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.597008 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.597057 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.597074 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.597267 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.597768 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.597873 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.598728 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.598781 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.598800 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.598932 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.599169 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.599289 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.600508 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.600566 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.600591 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.600608 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.600662 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.600691 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.600895 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.601043 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.601099 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.601881 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.601918 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.601936 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.602480 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.602508 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.602548 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.602565 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.602520 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.602623 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.602761 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc 
kubenswrapper[4988]: I1123 06:45:48.602844 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.602886 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.604080 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.604129 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.604150 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.604331 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.604364 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.604377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.604614 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.604670 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.605061 4988 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.605718 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.605746 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.605757 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.638472 4988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="400ms" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.679713 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.679928 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc 
kubenswrapper[4988]: I1123 06:45:48.679993 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.680034 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.680203 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.680332 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.680433 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.680554 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.680718 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.680819 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.681087 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.681179 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.681271 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.681438 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.681543 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.692237 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.693515 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.693556 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.693572 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.693608 4988 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.694076 4988 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.783127 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.783617 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.783861 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.784082 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.783726 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.783380 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.784403 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.784119 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.784947 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.785018 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.785183 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.785682 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.785477 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.785798 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.785910 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786021 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786057 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786074 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786087 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786123 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786135 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786145 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786168 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786181 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786245 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786283 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786316 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786340 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786314 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.786551 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.894305 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.901034 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.901117 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.901141 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.901303 4988 kubelet_node_status.go:76] "Attempting to register node" 
node="crc" Nov 23 06:45:48 crc kubenswrapper[4988]: E1123 06:45:48.901911 4988 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.931707 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.949964 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.958788 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.976158 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: I1123 06:45:48.982062 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.993021 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c46e71f073bb73f448801eed72a99494b9a8ff250eb5e5f9d19f9c2e400334ed WatchSource:0}: Error finding container c46e71f073bb73f448801eed72a99494b9a8ff250eb5e5f9d19f9c2e400334ed: Status 404 returned error can't find the container with id c46e71f073bb73f448801eed72a99494b9a8ff250eb5e5f9d19f9c2e400334ed Nov 23 06:45:48 crc kubenswrapper[4988]: W1123 06:45:48.994731 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e684f7cb980f84805b398d92a89b7e2c02fe28b077be6e7196327ad42d98f09c WatchSource:0}: Error finding container e684f7cb980f84805b398d92a89b7e2c02fe28b077be6e7196327ad42d98f09c: Status 404 returned error can't find the container with id e684f7cb980f84805b398d92a89b7e2c02fe28b077be6e7196327ad42d98f09c Nov 23 06:45:49 crc kubenswrapper[4988]: W1123 06:45:49.009793 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-70b98ba9cce0c2712805f865b697d35dec70129a5eb873c1614fe43e97fb2c23 WatchSource:0}: Error finding container 70b98ba9cce0c2712805f865b697d35dec70129a5eb873c1614fe43e97fb2c23: Status 404 returned error can't find the container with id 70b98ba9cce0c2712805f865b697d35dec70129a5eb873c1614fe43e97fb2c23 Nov 23 06:45:49 crc kubenswrapper[4988]: W1123 06:45:49.021687 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7dc4e2f238ab57a33f276049e8b2254640f9cbbdd10b11228b2af4cdd7001eff WatchSource:0}: Error finding container 7dc4e2f238ab57a33f276049e8b2254640f9cbbdd10b11228b2af4cdd7001eff: Status 404 returned error can't find the container with id 7dc4e2f238ab57a33f276049e8b2254640f9cbbdd10b11228b2af4cdd7001eff Nov 23 06:45:49 crc kubenswrapper[4988]: W1123 06:45:49.027436 4988 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6b5631874cc9bf34b006ab2352046f62bf583e42c815733760067922c856c692 WatchSource:0}: Error finding container 6b5631874cc9bf34b006ab2352046f62bf583e42c815733760067922c856c692: Status 404 returned error can't find the container with id 6b5631874cc9bf34b006ab2352046f62bf583e42c815733760067922c856c692 Nov 23 06:45:49 crc kubenswrapper[4988]: E1123 06:45:49.040298 4988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="800ms" Nov 23 06:45:49 crc kubenswrapper[4988]: W1123 06:45:49.268667 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:49 crc kubenswrapper[4988]: E1123 06:45:49.268749 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.302590 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.304360 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.304410 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.304427 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.304461 4988 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:45:49 crc kubenswrapper[4988]: E1123 06:45:49.304976 4988 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 23 06:45:49 crc kubenswrapper[4988]: W1123 06:45:49.376791 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:49 crc kubenswrapper[4988]: E1123 06:45:49.376867 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.427698 4988 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial 
tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.498968 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b5631874cc9bf34b006ab2352046f62bf583e42c815733760067922c856c692"} Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.500136 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7dc4e2f238ab57a33f276049e8b2254640f9cbbdd10b11228b2af4cdd7001eff"} Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.501330 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"70b98ba9cce0c2712805f865b697d35dec70129a5eb873c1614fe43e97fb2c23"} Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.504342 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e684f7cb980f84805b398d92a89b7e2c02fe28b077be6e7196327ad42d98f09c"} Nov 23 06:45:49 crc kubenswrapper[4988]: I1123 06:45:49.505459 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c46e71f073bb73f448801eed72a99494b9a8ff250eb5e5f9d19f9c2e400334ed"} Nov 23 06:45:49 crc kubenswrapper[4988]: W1123 06:45:49.627733 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:49 crc kubenswrapper[4988]: E1123 06:45:49.627850 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:49 crc kubenswrapper[4988]: W1123 06:45:49.726012 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:49 crc kubenswrapper[4988]: E1123 06:45:49.726149 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:49 crc kubenswrapper[4988]: E1123 06:45:49.840932 4988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="1.6s" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.106016 4988 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.107808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.107875 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.107893 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.107932 4988 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:45:50 crc kubenswrapper[4988]: E1123 06:45:50.108714 4988 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.337136 4988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 23 06:45:50 crc kubenswrapper[4988]: E1123 06:45:50.338356 4988 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.427543 4988 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.511332 4988 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d" exitCode=0 Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.511431 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d"} Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.511478 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.512441 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.512474 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.512486 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.514054 4988 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56" exitCode=0 Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.514103 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56"} Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.514121 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.514326 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.519227 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.519284 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.519298 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.520048 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.520120 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.520139 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.520380 4988 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="20275a7ffdba4ccb884e0c194bd22b37ec139c6ef9fafb02bc3cc1051b9af910" exitCode=0 Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.520572 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"20275a7ffdba4ccb884e0c194bd22b37ec139c6ef9fafb02bc3cc1051b9af910"} Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.520704 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.523718 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.523789 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.523820 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.524354 4988 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551" exitCode=0 Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.524513 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.524455 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551"} Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 
06:45:50.526458 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.526530 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.526567 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.534168 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b"} Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.534283 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4"} Nov 23 06:45:50 crc kubenswrapper[4988]: I1123 06:45:50.534305 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763"} Nov 23 06:45:51 crc kubenswrapper[4988]: W1123 06:45:51.146034 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:51 crc kubenswrapper[4988]: E1123 06:45:51.146178 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.427670 4988 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:51 crc kubenswrapper[4988]: E1123 06:45:51.442560 4988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.176:6443: connect: connection refused" interval="3.2s" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.540988 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.541041 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.541055 
4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.541068 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.544689 4988 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49" exitCode=0 Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.544786 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.544898 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.546682 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.546750 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.546772 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.551343 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5b182d4cdc5c7a501ed4181a04e799c276f2dd1da1b45b213bd34aa6ed03dce0"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.551398 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.552852 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.552903 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.552920 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.556162 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.556234 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.556249 4988 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.556349 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.557734 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.557766 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.557778 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.560473 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b"} Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.560640 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.561699 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.561729 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.561739 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.709461 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.710529 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.710553 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.710561 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.710581 4988 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:45:51 crc kubenswrapper[4988]: E1123 06:45:51.710929 4988 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.176:6443: connect: connection refused" node="crc" Nov 23 06:45:51 crc kubenswrapper[4988]: I1123 06:45:51.849616 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:45:52 crc kubenswrapper[4988]: W1123 06:45:52.049702 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:52 crc 
kubenswrapper[4988]: E1123 06:45:52.050300 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:52 crc kubenswrapper[4988]: W1123 06:45:52.253574 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.176:6443: connect: connection refused Nov 23 06:45:52 crc kubenswrapper[4988]: E1123 06:45:52.253661 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.176:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.574452 4988 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975" exitCode=0 Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.574578 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975"} Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.574638 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.576178 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.576227 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.576238 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.581355 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.581981 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.582332 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a6cb839500d8568c2a202fccdcafb293bb434b742749ac92adb0c4a87732e02"} Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.582400 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.582794 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.583540 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.583570 4988 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.583579 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.584056 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.584078 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.584088 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.584547 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.584565 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.584575 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.584968 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.584986 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:52 crc kubenswrapper[4988]: I1123 06:45:52.584996 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.118057 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.169575 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.590796 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1"} Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.590937 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.590948 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0"} Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.590954 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.590967 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c"} Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.590940 4988 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.591130 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d"} Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.592563 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.592609 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.592615 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.592629 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.592650 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.592683 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.592681 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.592742 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.592758 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:53 crc kubenswrapper[4988]: I1123 06:45:53.806621 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.598687 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.599603 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.599686 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.599587 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c"} Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.600603 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.600638 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.600652 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.600738 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.600763 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.600779 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.600838 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.600884 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.600903 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.740538 4988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.776072 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.911503 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.913414 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.913500 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.913518 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:54 crc kubenswrapper[4988]: I1123 06:45:54.913565 4988 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:45:55 crc kubenswrapper[4988]: I1123 06:45:55.600967 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:55 crc kubenswrapper[4988]: I1123 06:45:55.601029 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:55 crc kubenswrapper[4988]: I1123 06:45:55.602173 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:55 crc kubenswrapper[4988]: I1123 06:45:55.602178 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:55 crc kubenswrapper[4988]: I1123 06:45:55.602297 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:55 crc kubenswrapper[4988]: I1123 06:45:55.602322 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:55 crc kubenswrapper[4988]: I1123 06:45:55.602260 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:55 crc kubenswrapper[4988]: I1123 06:45:55.602362 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:55 crc kubenswrapper[4988]: I1123 06:45:55.680600 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:45:56 crc kubenswrapper[4988]: I1123 06:45:56.603471 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:56 crc kubenswrapper[4988]: I1123 06:45:56.604555 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:56 crc kubenswrapper[4988]: I1123 06:45:56.604591 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:56 crc kubenswrapper[4988]: I1123 06:45:56.604602 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.250082 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.250354 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.252117 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.252161 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.252179 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.790917 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.791117 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.792666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.792715 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.792731 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:45:57 crc kubenswrapper[4988]: I1123 06:45:57.798355 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:45:58 crc kubenswrapper[4988]: E1123 06:45:58.605188 4988 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 06:45:58 crc kubenswrapper[4988]: I1123 06:45:58.608516 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:58 crc kubenswrapper[4988]: I1123 06:45:58.609770 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:58 crc kubenswrapper[4988]: I1123 06:45:58.609838 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:58 crc kubenswrapper[4988]: I1123 06:45:58.609857 4988 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 06:45:59 crc kubenswrapper[4988]: I1123 06:45:59.008741 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 23 06:45:59 crc kubenswrapper[4988]: I1123 06:45:59.009030 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:45:59 crc kubenswrapper[4988]: I1123 06:45:59.010675 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:45:59 crc kubenswrapper[4988]: I1123 06:45:59.010744 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:45:59 crc kubenswrapper[4988]: I1123 06:45:59.010766 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:00 crc kubenswrapper[4988]: I1123 06:46:00.250927 4988 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 23 06:46:00 crc kubenswrapper[4988]: I1123 06:46:00.251034 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.428337 4988 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.621757 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.624917 4988 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a6cb839500d8568c2a202fccdcafb293bb434b742749ac92adb0c4a87732e02" exitCode=255 Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.625007 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3a6cb839500d8568c2a202fccdcafb293bb434b742749ac92adb0c4a87732e02"} Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.625242 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.626457 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.626524 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.626543 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.627527 4988 scope.go:117] 
"RemoveContainer" containerID="3a6cb839500d8568c2a202fccdcafb293bb434b742749ac92adb0c4a87732e02" Nov 23 06:46:02 crc kubenswrapper[4988]: W1123 06:46:02.650184 4988 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.650336 4988 trace.go:236] Trace[464524773]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:45:52.649) (total time: 10001ms): Nov 23 06:46:02 crc kubenswrapper[4988]: Trace[464524773]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:46:02.650) Nov 23 06:46:02 crc kubenswrapper[4988]: Trace[464524773]: [10.001163331s] [10.001163331s] END Nov 23 06:46:02 crc kubenswrapper[4988]: E1123 06:46:02.650372 4988 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.841576 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.841757 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.842759 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.842805 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:02 crc kubenswrapper[4988]: I1123 06:46:02.842818 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.243772 4988 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.243857 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.253153 4988 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.253271 4988 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.631880 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.634124 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e"} Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.634386 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.635789 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.635851 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.635870 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.813253 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.813445 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.815076 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.815139 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:03 crc kubenswrapper[4988]: I1123 06:46:03.815159 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4988]: I1123 06:46:04.785624 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:46:04 crc kubenswrapper[4988]: I1123 06:46:04.785832 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:46:04 crc kubenswrapper[4988]: I1123 06:46:04.785987 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:46:04 crc kubenswrapper[4988]: I1123 06:46:04.787591 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:04 crc kubenswrapper[4988]: I1123 06:46:04.787635 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:04 crc kubenswrapper[4988]: I1123 06:46:04.787654 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:04 crc kubenswrapper[4988]: I1123 06:46:04.793843 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:46:05 crc kubenswrapper[4988]: I1123 06:46:05.640629 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:46:05 crc kubenswrapper[4988]: I1123 06:46:05.641988 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:05 crc kubenswrapper[4988]: I1123 06:46:05.642069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:05 crc kubenswrapper[4988]: I1123 06:46:05.642107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:06 crc kubenswrapper[4988]: I1123 06:46:06.643481 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:46:06 crc kubenswrapper[4988]: I1123 06:46:06.644756 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:06 crc kubenswrapper[4988]: I1123 06:46:06.644806 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:06 crc kubenswrapper[4988]: I1123 06:46:06.644823 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.112793 4988 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.239290 4988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.242526 4988 trace.go:236] Trace[1776044370]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:45:55.360) (total time: 12882ms): Nov 23 06:46:08 crc kubenswrapper[4988]: Trace[1776044370]: ---"Objects listed" error: 12882ms (06:46:08.242) Nov 23 06:46:08 crc kubenswrapper[4988]: Trace[1776044370]: [12.882210192s] [12.882210192s] END Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.242566 4988 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.243055 4988 trace.go:236] Trace[1690805298]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:45:55.958) (total time: 12284ms): Nov 23 06:46:08 crc kubenswrapper[4988]: Trace[1690805298]: ---"Objects listed" error: 12284ms (06:46:08.242) Nov 23 06:46:08 crc kubenswrapper[4988]: Trace[1690805298]: [12.284067749s] [12.284067749s] END Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.243080 4988 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.248772 4988 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.249511 4988 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.250811 4988 trace.go:236] 
Trace[907946125]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:45:56.164) (total time: 12086ms): Nov 23 06:46:08 crc kubenswrapper[4988]: Trace[907946125]: ---"Objects listed" error: 12085ms (06:46:08.250) Nov 23 06:46:08 crc kubenswrapper[4988]: Trace[907946125]: [12.086039057s] [12.086039057s] END Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.250866 4988 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.259152 4988 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.291976 4988 csr.go:261] certificate signing request csr-2cchb is approved, waiting to be issued Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.307439 4988 csr.go:257] certificate signing request csr-2cchb is issued Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.419533 4988 apiserver.go:52] "Watching apiserver" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.429900 4988 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.430278 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.430674 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.430810 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.430928 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.430987 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.431005 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.431005 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.431213 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.431254 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.431295 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.437000 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.437513 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4486d"] Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.437674 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.437788 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4486d" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.438891 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.440437 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.440595 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.441003 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.441070 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.441797 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.441892 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.445845 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.445964 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.450427 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.479978 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.491784 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.493398 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.502400 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.518838 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.528084 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.531238 4988 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.537419 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.545789 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551042 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551098 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551149 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 
06:46:08.551172 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551220 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551243 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551265 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551289 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551346 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551369 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551412 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551422 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551479 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551503 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551505 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551504 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551567 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551752 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551780 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.551829 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552076 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552168 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552334 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552461 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552523 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552483 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552505 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552604 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552630 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552655 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552703 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552725 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552714 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552747 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552834 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552759 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552873 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552847 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552907 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552939 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552969 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552984 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552993 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553029 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553054 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553079 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553104 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553137 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553161 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553186 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553234 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553294 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553318 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553362 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553387 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553413 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553443 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553466 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553491 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553517 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553539 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553560 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553585 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553646 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553674 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553700 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553725 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553748 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553773 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553801 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553828 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553855 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553878 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553916 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553951 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553981 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554059 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554375 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554400 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.552991 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553063 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553091 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553593 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553601 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554545 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554554 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553687 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553745 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553818 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553890 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.553978 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554144 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554265 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554295 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554668 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554424 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554821 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554847 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554870 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554870 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554896 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554922 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554946 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.555358 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.555494 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.555537 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.555672 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.555781 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.555958 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.556262 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.556539 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.556566 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.556568 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.556713 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.556871 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.556871 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.556923 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.557026 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.557237 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.557282 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.557588 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.557618 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.557634 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.557841 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.557883 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.558115 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.558127 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.558309 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.558443 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.554381 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.558638 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.558775 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.558827 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.560291 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.560646 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.560696 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.560835 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.560904 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.560955 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.560977 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.561104 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.561257 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.561592 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.560996 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.561760 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.561775 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.561818 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.561835 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.561888 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562001 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562247 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562277 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562330 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562350 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562397 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562409 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562421 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562420 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562498 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562527 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562554 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562575 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562593 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562611 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562631 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562651 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562667 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562685 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562703 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562877 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562900 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562927 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562946 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562964 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562980 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.562996 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563015 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563033 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563050 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563067 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563090 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563107 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563122 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563137 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563154 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563171 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563202 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563219 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563236 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563254 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563274 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563294 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563313 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563332 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563353 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563372 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563391 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563411 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563429 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563447 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563466 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563482 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563498 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563517 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563535 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563553 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563569 4988 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563612 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563629 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563648 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563664 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563684 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563701 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563718 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563734 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563751 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 23 06:46:08 
crc kubenswrapper[4988]: I1123 06:46:08.563767 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563796 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563813 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563829 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.563849 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564423 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564478 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564498 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564515 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564532 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:46:08 crc 
kubenswrapper[4988]: I1123 06:46:08.564606 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564625 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564642 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564659 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564677 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564692 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564710 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564727 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564743 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564760 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564777 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564793 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564813 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564830 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564847 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564883 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564901 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564919 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564936 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564951 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564967 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.564986 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565003 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565020 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565037 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565055 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565078 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565095 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565113 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565130 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565148 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565166 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565181 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565212 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565455 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565482 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565500 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565517 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565534 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565552 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565571 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565624 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.567156 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.567287 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.566667 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565646 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565733 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.565790 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.566431 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.566580 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.566653 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.568109 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.568254 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.568602 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.568960 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.569148 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.569285 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.569386 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.569564 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.569839 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.569925 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:09.069871606 +0000 UTC m=+21.378384369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.569953 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.570161 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.570536 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.570906 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.571323 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.571418 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.571340 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.571044 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.571768 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.571757 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.571914 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.571865 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.572108 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.572157 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.572183 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.572039 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.572479 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.572824 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.572918 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.572965 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.573111 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.573180 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.573589 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.573797 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.574985 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.575001 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.575292 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.575499 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.575821 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.575928 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.575961 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.576106 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.576631 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.576701 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.576757 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.576941 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.576963 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.576978 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.577235 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.577240 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.577301 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.577542 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.577799 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.578003 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.578048 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.578089 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.578107 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.578088 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.578542 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.578725 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.578732 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.579001 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.579344 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.579434 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.579485 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.579550 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4cfec62c-b172-462b-b3d4-360ffed40b72-hosts-file\") pod \"node-resolver-4486d\" (UID: \"4cfec62c-b172-462b-b3d4-360ffed40b72\") " pod="openshift-dns/node-resolver-4486d"
Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.579706 4988 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.579794 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.579894 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckjjs\" (UniqueName: \"kubernetes.io/projected/4cfec62c-b172-462b-b3d4-360ffed40b72-kube-api-access-ckjjs\") pod \"node-resolver-4486d\" (UID: \"4cfec62c-b172-462b-b3d4-360ffed40b72\") " pod="openshift-dns/node-resolver-4486d"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.579960 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580013 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580043 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580099 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580125 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580183 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580602 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580665 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.580673 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:09.080641641 +0000 UTC m=+21.389154404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.579860 4988 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.578478 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580037 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580119 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580235 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580761 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.579886 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580737 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580741 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580282 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580848 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580878 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580705 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.580928 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.580961 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:09.080950079 +0000 UTC m=+21.389462842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.581150 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.581219 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.582711 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.582746 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.582751 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.582692 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.582797 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.583008 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.583023 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.583352 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.583394 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.583576 4988 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.583909 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.584015 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.584271 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.586977 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587014 4988 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587031 4988 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587042 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587054 4988 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587065 4988 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587076 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587320 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4").
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587472 4988 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587512 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587532 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588004 4988 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588021 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588035 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588046 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588057 4988 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588070 4988 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588082 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588101 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588121 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588137 4988 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588153 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588169 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588185 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588226 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588242 4988 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588261 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588274 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588285 4988 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588298 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588310 4988 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588320 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588332 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588345 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588356 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588368 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587426 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.587652 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588424 4988 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588442 4988 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588457 4988 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588470 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588484 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588495 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588507 4988 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588521 4988 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588532 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588555 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588569 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588581 4988 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588592 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588604 4988 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588616 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588627 4988 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588638 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588650 4988 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588661 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588673 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588685 4988 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588699 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588665 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588711 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.588901 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589211 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589341 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589364 4988 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589380 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589408 4988 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589423 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589435 4988 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589449 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589460 4988 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589480 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589490 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589500 4988 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589511 4988 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589525 4988 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" 
DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589539 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589562 4988 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589574 4988 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.589717 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.590387 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.590820 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.590924 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.590997 4988 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.591346 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.592687 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593466 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593489 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593499 4988 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593510 4988 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593520 4988 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593529 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593538 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593550 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593560 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593570 4988 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593582 4988 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593592 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593604 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593615 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593625 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593635 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593645 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593656 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593665 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593675 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593685 4988 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593694 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593705 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593714 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593724 4988 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593745 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593757 4988 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593766 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593777 4988 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593787 4988 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593797 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593808 4988 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593817 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593828 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593837 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593848 4988 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593856 4988 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593864 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593875 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593887 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593906 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593915 4988 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593924 4988 reconciler_common.go:293] "Volume detached for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593933 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593941 4988 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593950 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593959 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593969 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593979 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593988 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.593997 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.594006 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.594016 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.594027 4988 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.594036 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.594056 4988 reconciler_common.go:293] "Volume detached for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.594065 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.594076 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.595277 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.595373 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.596934 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.596958 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.596971 4988 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.597034 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:09.097014505 +0000 UTC m=+21.405527268 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.598007 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.598042 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.598054 4988 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.598098 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:09.098087971 +0000 UTC m=+21.406600954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.599456 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.600263 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.600279 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.600356 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.600379 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.600700 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.600776 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.601206 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.601277 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.601887 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.602676 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.603783 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.603840 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.603822 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.604015 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.604272 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" 
(OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.604371 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.604481 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.604694 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.610708 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.612772 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.614808 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.615430 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.620598 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.622776 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.625082 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.635042 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.636023 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.644894 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.652045 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.656532 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.666254 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.674263 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc 
kubenswrapper[4988]: I1123 06:46:08.683824 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.692433 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694573 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckjjs\" (UniqueName: \"kubernetes.io/projected/4cfec62c-b172-462b-b3d4-360ffed40b72-kube-api-access-ckjjs\") pod \"node-resolver-4486d\" (UID: \"4cfec62c-b172-462b-b3d4-360ffed40b72\") " pod="openshift-dns/node-resolver-4486d" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694643 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694665 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694708 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4cfec62c-b172-462b-b3d4-360ffed40b72-hosts-file\") pod \"node-resolver-4486d\" (UID: \"4cfec62c-b172-462b-b3d4-360ffed40b72\") " pod="openshift-dns/node-resolver-4486d" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694738 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694749 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694758 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694785 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694803 4988 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694812 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694823 4988 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694832 4988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694817 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694830 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694860 4988 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694915 4988 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694912 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4cfec62c-b172-462b-b3d4-360ffed40b72-hosts-file\") pod \"node-resolver-4486d\" (UID: \"4cfec62c-b172-462b-b3d4-360ffed40b72\") " pod="openshift-dns/node-resolver-4486d" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694928 4988 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694942 4988 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694956 4988 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694968 4988 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694980 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.694991 4988 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695003 4988 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695021 4988 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695031 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695042 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695053 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695064 4988 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695076 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695086 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695095 4988 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695105 4988 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695115 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695124 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695144 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695154 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695170 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695179 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695189 4988 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695211 4988 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695221 4988 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695231 4988 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695240 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695250 4988 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695262 4988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695271 4988 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695281 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695291 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695300 4988 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695310 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695321 4988 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695331 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695342 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695352 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695363 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695372 4988 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695381 4988 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695392 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695401 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695412 4988 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695422 4988 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695432 4988 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695442 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695451 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695461 4988 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.695470 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.704859 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.713014 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckjjs\" (UniqueName: \"kubernetes.io/projected/4cfec62c-b172-462b-b3d4-360ffed40b72-kube-api-access-ckjjs\") pod \"node-resolver-4486d\" (UID: \"4cfec62c-b172-462b-b3d4-360ffed40b72\") " pod="openshift-dns/node-resolver-4486d" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.715310 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.725161 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.735767 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.744747 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.748794 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.757206 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.758992 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: W1123 06:46:08.766097 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-49d5072415a396a9d0c153673a30ce50c16f000c568428234ccbb8d29faf9544 WatchSource:0}: Error finding container 49d5072415a396a9d0c153673a30ce50c16f000c568428234ccbb8d29faf9544: Status 404 returned error can't find the container with id 49d5072415a396a9d0c153673a30ce50c16f000c568428234ccbb8d29faf9544 Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.767372 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:46:08 crc kubenswrapper[4988]: W1123 06:46:08.768436 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-a931a4e48cadead0f93fd624eab62223be2f5d3a67a8a2f615c1777d368accfa WatchSource:0}: Error finding container a931a4e48cadead0f93fd624eab62223be2f5d3a67a8a2f615c1777d368accfa: Status 404 returned error can't find the container with id a931a4e48cadead0f93fd624eab62223be2f5d3a67a8a2f615c1777d368accfa Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.774209 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4486d" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.825627 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jnwbw"] Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.826002 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.828282 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxqnz"] Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.828876 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4p82c"] Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.829011 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-l5wgs"] Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.829278 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.829374 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:08 crc kubenswrapper[4988]: E1123 06:46:08.829435 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.829686 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.829700 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-4kp4h"] Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.829888 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.829783 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.829755 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.829937 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.830052 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.833440 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.836128 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.836729 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.836881 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.837128 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.837387 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.837658 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.837775 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.837929 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.838060 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.838258 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.838582 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.838990 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.840685 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.840706 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.843014 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.852085 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.862937 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.872532 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.883371 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.893973 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897025 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvvt6\" (UniqueName: \"kubernetes.io/projected/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-kube-api-access-gvvt6\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897065 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897086 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-cnibin\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897112 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-bin\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897141 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovn-node-metrics-cert\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897163 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-daemon-config\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897179 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-var-lib-openvswitch\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897211 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-log-socket\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897229 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfnn\" (UniqueName: \"kubernetes.io/projected/1a94eb06-d03a-43c9-8004-73d48280435f-kube-api-access-7kfnn\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897246 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-var-lib-kubelet\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897262 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-slash\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897278 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-systemd\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897300 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-config\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897317 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-ovn\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897338 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dde7218-bd4b-4585-b049-cb8db163fdac-cni-binary-copy\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897353 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-env-overrides\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897369 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897384 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-socket-dir-parent\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897398 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-etc-openvswitch\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897413 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-var-lib-cni-bin\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897427 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-run-multus-certs\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897444 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-rootfs\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897458 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-kubelet\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897474 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-netd\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897494 4988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-run-k8s-cni-cncf-io\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897510 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-openvswitch\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897526 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897556 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdwf\" (UniqueName: \"kubernetes.io/projected/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-kube-api-access-2hdwf\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897571 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-os-release\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897592 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-systemd-units\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897608 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-etc-kubernetes\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897624 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dbf\" (UniqueName: \"kubernetes.io/projected/0dde7218-bd4b-4585-b049-cb8db163fdac-kube-api-access-z6dbf\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.897641 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-run-netns\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc 
kubenswrapper[4988]: I1123 06:46:08.897656 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-var-lib-cni-multus\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.898342 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-conf-dir\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.898385 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-node-log\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.898426 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-proxy-tls\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.898450 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-hostroot\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.898484 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-cni-dir\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.898504 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-mcd-auth-proxy-config\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.898526 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-netns\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.898548 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-script-lib\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.898567 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-system-cni-dir\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.906577 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.914130 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc 
kubenswrapper[4988]: I1123 06:46:08.925462 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.937437 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.948396 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.957079 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.968509 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.981797 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:08 crc kubenswrapper[4988]: I1123 06:46:08.992344 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.000583 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.000786 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-proxy-tls\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.000829 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-cni-dir\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.000852 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-hostroot\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.000881 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-system-cni-dir\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.000941 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-script-lib\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.000966 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-system-cni-dir\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.000991 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-mcd-auth-proxy-config\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001013 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-netns\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001031 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-cni-dir\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001059 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvvt6\" (UniqueName: \"kubernetes.io/projected/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-kube-api-access-gvvt6\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001085 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001088 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-hostroot\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001156 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-cnibin\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001109 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-cnibin\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001237 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-system-cni-dir\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001291 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-bin\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001327 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovn-node-metrics-cert\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001358 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-cni-binary-copy\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001376 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-bin\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001385 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-var-lib-openvswitch\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001408 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-log-socket\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001431 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfnn\" (UniqueName: \"kubernetes.io/projected/1a94eb06-d03a-43c9-8004-73d48280435f-kube-api-access-7kfnn\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001452 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-daemon-config\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001480 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-slash\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001500 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-systemd\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001511 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-var-lib-openvswitch\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001522 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-config\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001583 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-var-lib-kubelet\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001621 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzcqn\" (UniqueName: \"kubernetes.io/projected/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-kube-api-access-gzcqn\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001668 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-ovn\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001717 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dde7218-bd4b-4585-b049-cb8db163fdac-cni-binary-copy\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001744 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-cnibin\") pod 
\"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001769 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001796 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001822 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-env-overrides\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001827 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-mcd-auth-proxy-config\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001851 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001875 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-netns\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001880 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-socket-dir-parent\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001909 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-etc-openvswitch\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001935 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-var-lib-cni-bin\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001967 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-run-multus-certs\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.001991 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-os-release\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002023 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-netd\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002045 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-run-k8s-cni-cncf-io\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002073 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-rootfs\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002097 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-kubelet\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002122 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdwf\" (UniqueName: \"kubernetes.io/projected/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-kube-api-access-2hdwf\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002143 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-os-release\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002169 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-systemd-units\") pod 
\"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002222 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-openvswitch\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002245 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002268 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-etc-kubernetes\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002293 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-run-netns\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002316 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-var-lib-cni-multus\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002312 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-etc-openvswitch\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002341 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-conf-dir\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002349 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-slash\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002365 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dbf\" (UniqueName: \"kubernetes.io/projected/0dde7218-bd4b-4585-b049-cb8db163fdac-kube-api-access-z6dbf\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: 
I1123 06:46:09.002391 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-log-socket\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002395 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-node-log\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002392 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-var-lib-kubelet\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002313 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002437 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-systemd\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002495 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-node-log\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002496 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-rootfs\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002521 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-run-k8s-cni-cncf-io\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002500 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-config\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002537 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002557 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-conf-dir\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002551 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-netd\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002567 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-ovn\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002579 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-var-lib-cni-multus\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002592 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-etc-kubernetes\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002584 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-var-lib-cni-bin\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002619 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-openvswitch\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002666 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-daemon-config\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002714 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-run-multus-certs\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc 
kubenswrapper[4988]: I1123 06:46:09.002714 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-systemd-units\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002747 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-multus-socket-dir-parent\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002763 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-host-run-netns\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002800 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-kubelet\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.002845 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0dde7218-bd4b-4585-b049-cb8db163fdac-os-release\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.003083 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-env-overrides\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.003166 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0dde7218-bd4b-4585-b049-cb8db163fdac-cni-binary-copy\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.003204 4988 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.003292 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs podName:1a94eb06-d03a-43c9-8004-73d48280435f nodeName:}" failed. No retries permitted until 2025-11-23 06:46:09.503273149 +0000 UTC m=+21.811786102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs") pod "network-metrics-daemon-l5wgs" (UID: "1a94eb06-d03a-43c9-8004-73d48280435f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.003972 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-script-lib\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.006871 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-proxy-tls\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.007542 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovn-node-metrics-cert\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.020647 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfnn\" (UniqueName: \"kubernetes.io/projected/1a94eb06-d03a-43c9-8004-73d48280435f-kube-api-access-7kfnn\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.024726 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdwf\" (UniqueName: \"kubernetes.io/projected/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-kube-api-access-2hdwf\") pod \"ovnkube-node-bxqnz\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.027333 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.028226 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z6dbf\" (UniqueName: \"kubernetes.io/projected/0dde7218-bd4b-4585-b049-cb8db163fdac-kube-api-access-z6dbf\") pod \"multus-4p82c\" (UID: \"0dde7218-bd4b-4585-b049-cb8db163fdac\") " pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.029040 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvvt6\" (UniqueName: \"kubernetes.io/projected/a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1-kube-api-access-gvvt6\") pod \"machine-config-daemon-jnwbw\" (UID: \"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\") " pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.050338 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.087927 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.102904 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103001 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzcqn\" (UniqueName: \"kubernetes.io/projected/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-kube-api-access-gzcqn\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103028 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-cnibin\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103046 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103060 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103094 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-os-release\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103125 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103143 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103158 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-system-cni-dir\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103185 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103224 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103250 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-cni-binary-copy\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 
06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103413 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-os-release\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103449 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-system-cni-dir\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.103507 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:10.103491659 +0000 UTC m=+22.412004422 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.103559 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.103583 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.103595 4988 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.103632 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:10.103620062 +0000 UTC m=+22.412132825 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103845 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-cnibin\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.103914 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-cni-binary-copy\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.103981 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.103996 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.104004 4988 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.104015 4988 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.104028 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:10.104020622 +0000 UTC m=+22.412533385 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.104071 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:10.104052793 +0000 UTC m=+22.412565576 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.104090 4988 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.104121 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:10.104107964 +0000 UTC m=+22.412620727 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.104464 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.104674 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.126809 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.158270 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzcqn\" (UniqueName: \"kubernetes.io/projected/638ab0f4-59cd-4702-9e1d-bd3c3a5078e3-kube-api-access-gzcqn\") pod \"multus-additional-cni-plugins-4kp4h\" (UID: \"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\") " pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.188935 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.210739 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:46:09 crc kubenswrapper[4988]: W1123 06:46:09.223324 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eace8c_a9a8_4d0b_ab50_c9cfde78b7c1.slice/crio-adaca03e98ed3734f49904612e28263cf9b380247e4bd0081a8e342a1723f3a8 WatchSource:0}: Error finding container adaca03e98ed3734f49904612e28263cf9b380247e4bd0081a8e342a1723f3a8: Status 404 returned error can't find the container with id adaca03e98ed3734f49904612e28263cf9b380247e4bd0081a8e342a1723f3a8 Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.226666 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.231178 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4p82c" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.237143 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:09 crc kubenswrapper[4988]: W1123 06:46:09.247040 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dde7218_bd4b_4585_b049_cb8db163fdac.slice/crio-a9e4d6b14d2173e82576383e4a290b1c4ae506db1b0a85a027e217aea0ff5d06 WatchSource:0}: Error finding container a9e4d6b14d2173e82576383e4a290b1c4ae506db1b0a85a027e217aea0ff5d06: Status 404 returned error can't find the container with id a9e4d6b14d2173e82576383e4a290b1c4ae506db1b0a85a027e217aea0ff5d06 Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.256654 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" Nov 23 06:46:09 crc kubenswrapper[4988]: W1123 06:46:09.257527 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb5bfadf_3097_45a0_a0d8_2b75e4c1e931.slice/crio-1c2f447fceae5d2e56e4f2800c4665e6888b7f2b9fd30d462b5b18dcf20c847d WatchSource:0}: Error finding container 1c2f447fceae5d2e56e4f2800c4665e6888b7f2b9fd30d462b5b18dcf20c847d: Status 404 returned error can't find the container with id 1c2f447fceae5d2e56e4f2800c4665e6888b7f2b9fd30d462b5b18dcf20c847d Nov 23 06:46:09 crc kubenswrapper[4988]: W1123 06:46:09.302370 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod638ab0f4_59cd_4702_9e1d_bd3c3a5078e3.slice/crio-2529ef3150a3a4506709d7ea186aef590be01665762b49143a0399ddfc31c047 WatchSource:0}: Error finding container 2529ef3150a3a4506709d7ea186aef590be01665762b49143a0399ddfc31c047: Status 404 returned error can't find the container with id 2529ef3150a3a4506709d7ea186aef590be01665762b49143a0399ddfc31c047 Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.309334 4988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-23 06:41:08 +0000 UTC, rotation deadline is 2026-08-11 16:54:17.739299443 +0000 UTC Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.309392 4988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6274h8m8.429910249s for next certificate rotation Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.495112 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.495278 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.509059 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.509256 4988 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.509324 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs podName:1a94eb06-d03a-43c9-8004-73d48280435f nodeName:}" failed. No retries permitted until 2025-11-23 06:46:10.509305262 +0000 UTC m=+22.817818025 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs") pod "network-metrics-daemon-l5wgs" (UID: "1a94eb06-d03a-43c9-8004-73d48280435f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.652623 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.652687 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.652699 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ad460e5d24984ee2a7557edae2cd5f296d8ad3695b1c923e56306d655e84818b"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.654727 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerStarted","Data":"c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.654773 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerStarted","Data":"2529ef3150a3a4506709d7ea186aef590be01665762b49143a0399ddfc31c047"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.656816 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.656867 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.656885 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"adaca03e98ed3734f49904612e28263cf9b380247e4bd0081a8e342a1723f3a8"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.658741 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.658770 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"49d5072415a396a9d0c153673a30ce50c16f000c568428234ccbb8d29faf9544"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.661251 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.661845 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.664136 4988 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e" exitCode=255 Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.664231 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.664371 4988 scope.go:117] "RemoveContainer" containerID="3a6cb839500d8568c2a202fccdcafb293bb434b742749ac92adb0c4a87732e02" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.665278 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.665783 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4486d" event={"ID":"4cfec62c-b172-462b-b3d4-360ffed40b72","Type":"ContainerStarted","Data":"5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.665815 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4486d" event={"ID":"4cfec62c-b172-462b-b3d4-360ffed40b72","Type":"ContainerStarted","Data":"38ede4d2abdb533b422d3828857b3a68b2d893c80f43c6f030fb87c4c594066b"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.667379 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a931a4e48cadead0f93fd624eab62223be2f5d3a67a8a2f615c1777d368accfa"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.669035 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de" exitCode=0 Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.669085 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.669146 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"1c2f447fceae5d2e56e4f2800c4665e6888b7f2b9fd30d462b5b18dcf20c847d"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.670469 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4p82c" event={"ID":"0dde7218-bd4b-4585-b049-cb8db163fdac","Type":"ContainerStarted","Data":"26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 
06:46:09.670492 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4p82c" event={"ID":"0dde7218-bd4b-4585-b049-cb8db163fdac","Type":"ContainerStarted","Data":"a9e4d6b14d2173e82576383e4a290b1c4ae506db1b0a85a027e217aea0ff5d06"} Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.675997 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.692588 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.707462 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.717874 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.729068 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.738616 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.752226 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.765919 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.776436 4988 scope.go:117] "RemoveContainer" containerID="8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e" Nov 23 06:46:09 crc kubenswrapper[4988]: E1123 06:46:09.776675 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.778947 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.779678 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.816491 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:09 crc kubenswrapper[4988]: 
I1123 06:46:09.832696 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.847948 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.865030 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a6cb839500d8568c2a202fccdcafb293bb434b742749ac92adb0c4a87732e02\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:02Z\\\",\\\"message\\\":\\\"W1123 06:45:51.786358 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:45:51.786771 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880351 cert, and key in /tmp/serving-cert-4109283190/serving-signer.crt, /tmp/serving-cert-4109283190/serving-signer.key\\\\nI1123 06:45:52.025581 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:45:52.028251 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:45:52.028468 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:45:52.031393 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4109283190/tls.crt::/tmp/serving-cert-4109283190/tls.key\\\\\\\"\\\\nF1123 06:46:02.362802 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.877009 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.893187 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.930809 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:09 crc kubenswrapper[4988]: I1123 06:46:09.970562 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:09Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.015482 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.052007 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.113750 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.114993 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.115142 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115278 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:12.115223987 +0000 UTC m=+24.423736750 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115284 4988 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.115337 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115380 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:12.11537219 +0000 UTC m=+24.423884953 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.115423 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115436 4988 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115490 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:12.115474893 +0000 UTC m=+24.423987656 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.115512 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115619 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115643 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115659 4988 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115697 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115712 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115726 4988 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115699 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:12.115690988 +0000 UTC m=+24.424203751 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.115772 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-23 06:46:12.11576551 +0000 UTC m=+24.424278273 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.129780 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.168939 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.211233 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.250879 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.303971 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.348950 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.495955 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.495985 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.495955 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.496085 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.496130 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.496225 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.499722 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.500467 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.501176 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.501817 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.502419 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.502931 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.504633 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.505232 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.514822 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.515356 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.515861 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.516710 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.517308 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.518521 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.519043 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.519542 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.520088 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.520336 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.520461 4988 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.520521 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs podName:1a94eb06-d03a-43c9-8004-73d48280435f nodeName:}" failed. No retries permitted until 2025-11-23 06:46:12.520507027 +0000 UTC m=+24.829019780 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs") pod "network-metrics-daemon-l5wgs" (UID: "1a94eb06-d03a-43c9-8004-73d48280435f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.520549 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.521090 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.521661 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.522111 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.522694 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.523114 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.523749 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.527133 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.527984 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.529333 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.530129 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.531508 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.532025 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 23 
06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.532536 4988 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.532640 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.533933 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.535581 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.536024 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.537655 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.538690 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.539243 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.540346 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.541014 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.541692 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.542657 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.543683 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.544325 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 23 
06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.545137 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.545754 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.546648 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.547382 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.548269 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.548761 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.549266 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.550140 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.550786 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.551628 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.680082 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.682593 4988 scope.go:117] "RemoveContainer" containerID="8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e" Nov 23 06:46:10 crc kubenswrapper[4988]: E1123 06:46:10.682784 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.685540 4988 generic.go:334] "Generic (PLEG): container finished" podID="638ab0f4-59cd-4702-9e1d-bd3c3a5078e3" 
containerID="c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08" exitCode=0 Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.685576 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerDied","Data":"c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08"} Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.689851 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472"} Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.689884 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e"} Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.689893 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892"} Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.689917 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48"} Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.689925 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31"} Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.689933 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842"} Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.714018 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.740959 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.754895 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.772878 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.787531 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.802800 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.815164 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.830364 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.843539 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.864568 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.887102 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.903554 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.924711 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.941013 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.954150 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:10 crc kubenswrapper[4988]: I1123 06:46:10.973749 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:10Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.011023 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.051012 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.095682 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.132544 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.177865 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.236285 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.254317 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.293000 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.336819 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.371686 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.416570 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.470168 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.495311 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:11 crc kubenswrapper[4988]: E1123 06:46:11.495455 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.695460 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerStarted","Data":"8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef"} Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.716913 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.730366 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.752650 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.767281 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.783690 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.800353 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.816337 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.841881 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.855306 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.867832 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.891785 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.936459 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:11 crc kubenswrapper[4988]: I1123 06:46:11.979263 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.014531 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.085622 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vzk8l"] Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.086263 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.090347 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.090347 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.091438 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.102139 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.130426 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.142355 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.142490 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.142529 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.142560 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.142593 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.142760 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.142779 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.142795 4988 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.142850 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:16.142830436 +0000 UTC m=+28.451343219 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.143266 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:16.143249766 +0000 UTC m=+28.451762539 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.143353 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.143371 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.143384 4988 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.143379 4988 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.143425 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:16.14341339 +0000 UTC m=+28.451926173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.143479 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:16.143455411 +0000 UTC m=+28.451968174 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.143534 4988 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.143660 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-23 06:46:16.143632505 +0000 UTC m=+28.452145308 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.173132 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.214616 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.243674 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/33c56f7a-abc6-48c2-bfe8-53019ba9ed90-serviceca\") pod \"node-ca-vzk8l\" (UID: \"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\") " pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.243735 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmzbr\" (UniqueName: \"kubernetes.io/projected/33c56f7a-abc6-48c2-bfe8-53019ba9ed90-kube-api-access-vmzbr\") pod \"node-ca-vzk8l\" (UID: \"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\") " pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.243795 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33c56f7a-abc6-48c2-bfe8-53019ba9ed90-host\") pod \"node-ca-vzk8l\" (UID: \"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\") " pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.254107 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.297220 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.335005 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.344985 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33c56f7a-abc6-48c2-bfe8-53019ba9ed90-host\") pod \"node-ca-vzk8l\" (UID: \"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\") " pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.345072 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/33c56f7a-abc6-48c2-bfe8-53019ba9ed90-serviceca\") pod \"node-ca-vzk8l\" (UID: \"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\") " pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.345126 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmzbr\" (UniqueName: \"kubernetes.io/projected/33c56f7a-abc6-48c2-bfe8-53019ba9ed90-kube-api-access-vmzbr\") pod \"node-ca-vzk8l\" (UID: \"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\") " pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.345137 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/33c56f7a-abc6-48c2-bfe8-53019ba9ed90-host\") pod \"node-ca-vzk8l\" (UID: \"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\") " pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.346233 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/33c56f7a-abc6-48c2-bfe8-53019ba9ed90-serviceca\") pod \"node-ca-vzk8l\" (UID: \"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\") " pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.370686 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.405669 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmzbr\" (UniqueName: \"kubernetes.io/projected/33c56f7a-abc6-48c2-bfe8-53019ba9ed90-kube-api-access-vmzbr\") pod \"node-ca-vzk8l\" (UID: \"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\") " pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.466977 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.480981 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.496095 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.496254 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.496271 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.496335 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.496441 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.496543 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.518093 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.547705 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.547866 4988 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: E1123 06:46:12.547964 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs podName:1a94eb06-d03a-43c9-8004-73d48280435f nodeName:}" failed. No retries permitted until 2025-11-23 06:46:16.547939181 +0000 UTC m=+28.856451954 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs") pod "network-metrics-daemon-l5wgs" (UID: "1a94eb06-d03a-43c9-8004-73d48280435f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.556582 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{
\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.594334 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.632926 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.676676 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.700444 4988 generic.go:334] "Generic (PLEG): container finished" podID="638ab0f4-59cd-4702-9e1d-bd3c3a5078e3" containerID="8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef" exitCode=0 Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.700485 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerDied","Data":"8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef"} Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.703324 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa"} Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.703762 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vzk8l" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.719303 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.756745 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.788522 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.829995 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.871921 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.884938 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.906229 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.908874 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.932341 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 23 06:46:12 crc kubenswrapper[4988]: I1123 06:46:12.971932 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.011123 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.048974 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.090077 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.131691 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.169574 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.217740 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.251217 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.295401 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.336896 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.378540 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.413432 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.458290 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.491703 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.495975 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:13 crc kubenswrapper[4988]: E1123 06:46:13.496168 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.535468 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.578397 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.615586 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.655046 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.715851 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.717057 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3"} Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.719147 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vzk8l" event={"ID":"33c56f7a-abc6-48c2-bfe8-53019ba9ed90","Type":"ContainerStarted","Data":"3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f"} Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.719256 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vzk8l" event={"ID":"33c56f7a-abc6-48c2-bfe8-53019ba9ed90","Type":"ContainerStarted","Data":"d9a0a3094041fe250d126399b455de0fc562db7093f8951875a51bccbfd59c23"} Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.723554 4988 generic.go:334] "Generic (PLEG): container finished" podID="638ab0f4-59cd-4702-9e1d-bd3c3a5078e3" containerID="41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8" exitCode=0 Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.723642 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerDied","Data":"41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8"} Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.754110 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: E1123 06:46:13.763051 4988 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.795629 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.835831 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.879748 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.910873 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.953687 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:46:13 crc kubenswrapper[4988]: I1123 06:46:13.992562 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.030088 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.071435 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.119391 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.150276 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.196843 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.233921 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.277021 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.314542 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.357982 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.397224 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.435982 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.476761 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.495669 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.495757 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.495693 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.495860 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.496042 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.496266 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.529886 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a6
3aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.561100 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.595896 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.632886 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.650232 4988 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.653832 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.654381 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.654410 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.654609 4988 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.663905 4988 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.664263 4988 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.665748 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.665799 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.665813 4988 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.665837 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.665852 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:14Z","lastTransitionTime":"2025-11-23T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.687606 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.692408 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.692480 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.692499 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.692525 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.692543 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:14Z","lastTransitionTime":"2025-11-23T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.712424 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.712424 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z"
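Every one of these patch attempts dies in the TLS handshake with the network-node-identity webhook: the serving certificate behind https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, about three months before this boot. A minimal sketch of the same validity check, runnable on the node itself; it assumes the webhook is still listening on that port and that the cryptography package (version 42 or newer) is installed, neither of which the log itself confirms:

import datetime
import socket
import ssl

from cryptography import x509

# Disable verification so the handshake succeeds even with an expired cert;
# we only want to read the certificate, not trust the session.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection(("127.0.0.1", 9743), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.datetime.now(datetime.timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)  # per the log: 2025-08-24T17:21:41Z
if now > cert.not_valid_after_utc:
    print("certificate has expired, matching the kubelet's x509 error")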
event="NodeHasNoDiskPressure" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.717688 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.717711 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.717728 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:14Z","lastTransitionTime":"2025-11-23T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.732520 4988 generic.go:334] "Generic (PLEG): container finished" podID="638ab0f4-59cd-4702-9e1d-bd3c3a5078e3" containerID="946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3" exitCode=0 Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.732649 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerDied","Data":"946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3"} Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.738809 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.738809 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z"
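The kubelet retries the node-status patch up to five times per sync (the nodeStatusUpdateRetry constant in current kubelets) before giving up, which is why the same error record repeats with only the microsecond timestamp advancing. The payload is ordinary strategic-merge-patch JSON that klog's quoting has escaped twice, which is what makes the records hard to read; a quick way to recover one, shown on a shortened stand-in rather than a full payload from this log:

import json

# Stand-in for a doubly escaped patch as it appears inside err="..." (abridged).
escaped = (
    '{\\"status\\":{\\"$setElementOrder/conditions\\":'
    '[{\\"type\\":\\"MemoryPressure\\"},{\\"type\\":\\"Ready\\"}],'
    '\\"conditions\\":[{\\"reason\\":\\"KubeletNotReady\\",'
    '\\"status\\":\\"False\\",\\"type\\":\\"Ready\\"}]}}'
)

# Undo the quoting layer, then parse. "$setElementOrder/conditions" is
# strategic-merge-patch metadata: it fixes the order of the merged
# "conditions" list so the sender need not resend every element.
patch = json.loads(escaped.replace('\\"', '"'))
print(json.dumps(patch, indent=2))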
event="NodeHasNoDiskPressure" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.743884 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.744046 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.744181 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:14Z","lastTransitionTime":"2025-11-23T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.764694 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.771074 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.771122 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.771142 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.771169 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.771189 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:14Z","lastTransitionTime":"2025-11-23T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.771080 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.790512 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: E1123 06:46:14.790870 4988 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.793154 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.793236 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.793257 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.793315 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.793339 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:14Z","lastTransitionTime":"2025-11-23T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.795711 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.846186 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.863416 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.886008 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.897037 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.897083 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.897095 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.897112 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.897124 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:14Z","lastTransitionTime":"2025-11-23T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.914725 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.953535 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.991627 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.999045 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.999075 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.999096 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.999111 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:14 crc kubenswrapper[4988]: I1123 06:46:14.999119 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:14Z","lastTransitionTime":"2025-11-23T06:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.031661 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.071630 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.100909 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.100932 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.100941 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.100954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.100963 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:15Z","lastTransitionTime":"2025-11-23T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.109857 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.158913 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.195859 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.203560 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.203620 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.203639 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.203666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.203683 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:15Z","lastTransitionTime":"2025-11-23T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.238594 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.274527 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.306993 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.307030 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.307046 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.307065 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.307080 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:15Z","lastTransitionTime":"2025-11-23T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.314448 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.410140 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.410181 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.410233 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.410257 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.410274 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:15Z","lastTransitionTime":"2025-11-23T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.495949 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:15 crc kubenswrapper[4988]: E1123 06:46:15.496122 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.512891 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.512938 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.512956 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.512980 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.512998 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:15Z","lastTransitionTime":"2025-11-23T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.615699 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.615763 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.615782 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.615807 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.615828 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:15Z","lastTransitionTime":"2025-11-23T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.719000 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.719060 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.719077 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.719105 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.719124 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:15Z","lastTransitionTime":"2025-11-23T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.741626 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerStarted","Data":"e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.748633 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.748971 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.749003 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.760563 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.779400 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.800843 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.812388 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.815466 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.822021 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.822069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.822085 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.822106 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:15 crc 
kubenswrapper[4988]: I1123 06:46:15.822123 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:15Z","lastTransitionTime":"2025-11-23T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.824862 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":
\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.850738 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.889428 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.916991 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.925309 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.925356 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.925368 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.925384 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.925394 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:15Z","lastTransitionTime":"2025-11-23T06:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.938370 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.955543 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.966777 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:15 crc kubenswrapper[4988]: I1123 06:46:15.984283 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.000290 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:15Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.013694 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.026134 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.027462 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.027502 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.027512 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.027534 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.027544 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.044233 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.063262 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.076351 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.092455 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.112348 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b20782fa654e6b1bf133eb56470e90cb3bbed92
798fcb372a707bda5ecbf2fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.129890 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.129949 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.129968 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.129990 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.130032 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.131179 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:
46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.153938 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.188569 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.191989 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.192083 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.192109 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.192132 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.192148 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192272 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:24.192239269 +0000 UTC m=+36.500752062 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192290 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192311 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192322 4988 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192362 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:24.192349652 +0000 UTC m=+36.500862415 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192377 4988 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192433 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:24.192419754 +0000 UTC m=+36.500932547 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192540 4988 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192685 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:24.192639299 +0000 UTC m=+36.501152092 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192549 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192735 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192755 4988 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.192804 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-23 06:46:24.192790753 +0000 UTC m=+36.501303546 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.232640 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.232709 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.232727 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.232752 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.232770 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.236864 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.271868 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.315505 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.335679 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.335751 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.335773 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.335808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.335831 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.355303 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.391263 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.433412 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.439020 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.439274 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.439412 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.439597 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.439774 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.489535 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.495671 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.495700 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.495882 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.496114 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.496301 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.496643 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.515712 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.542598 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.542654 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.542672 4988 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.542698 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.542716 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.556291 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.595669 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.596658 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.596835 4988 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: E1123 06:46:16.596939 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs podName:1a94eb06-d03a-43c9-8004-73d48280435f nodeName:}" failed. No retries permitted until 2025-11-23 06:46:24.596913074 +0000 UTC m=+36.905425867 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs") pod "network-metrics-daemon-l5wgs" (UID: "1a94eb06-d03a-43c9-8004-73d48280435f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.645699 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.645753 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.645765 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.645783 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.645795 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.749049 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.749114 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.749131 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.749155 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.749172 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.761180 4988 generic.go:334] "Generic (PLEG): container finished" podID="638ab0f4-59cd-4702-9e1d-bd3c3a5078e3" containerID="e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682" exitCode=0 Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.761252 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerDied","Data":"e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.761480 4988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.788432 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/l
og/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.809944 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.812621 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.831700 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.848963 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.855525 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.855594 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.855614 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.855638 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.855653 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.872266 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.897042 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.917028 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.935011 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.954549 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.958469 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.958492 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.958500 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.958512 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.958521 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:16Z","lastTransitionTime":"2025-11-23T06:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:16 crc kubenswrapper[4988]: I1123 06:46:16.990901 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.033278 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.062703 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.062758 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.062775 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.062800 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.062819 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.073014 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.118145 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.157672 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.165311 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.165360 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.165372 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.165390 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.165404 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.196000 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.235739 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.268036 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.268069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.268080 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.268095 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.268107 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.371696 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.371757 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.371775 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.371808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.371825 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.476318 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.476374 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.476392 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.476417 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.476442 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.496010 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:17 crc kubenswrapper[4988]: E1123 06:46:17.496185 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.579012 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.579078 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.579097 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.579124 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.579141 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.682459 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.682491 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.682500 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.682540 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.682554 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.768019 4988 generic.go:334] "Generic (PLEG): container finished" podID="638ab0f4-59cd-4702-9e1d-bd3c3a5078e3" containerID="59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f" exitCode=0 Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.768067 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerDied","Data":"59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.784614 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.784856 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.784900 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.784912 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.784930 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.784943 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.798184 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.817107 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.831794 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.844878 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z"
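[Editor's note] Every "Failed to update status for pod" record in this stretch shares one root cause, visible at the end of the record above: the kubelet's status PATCH is gated by the "pod.network-node-identity.openshift.io" admission webhook at https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z, while the node clock reads 2025-11-23 -- consistent with a CRC VM resumed long after its certificates lapsed. A minimal sketch to confirm the expiry from the node follows; it assumes the webhook is still listening on 127.0.0.1:9743 (the port comes from the failing Post URL above) and that Python 3 plus the cryptography package are available. It is an editor's illustration, not part of the log:

    # Hypothetical diagnostic -- fetch the webhook's serving certificate and
    # compare its notAfter against the node clock.
    import datetime, socket, ssl
    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743             # from the failing Post URL above

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False                  # we only want the raw cert;
    ctx.verify_mode = ssl.CERT_NONE             # verification is the part that fails

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    not_after = cert.not_valid_after_utc        # cryptography >= 42; not_valid_after on older versions
    now = datetime.datetime.now(datetime.timezone.utc)
    print(f"notAfter={not_after.isoformat()}  now={now.isoformat()}  expired={now > not_after}")

Given the records above, this should print notAfter=2025-08-24T17:21:41+00:00 and expired=True for every pod whose status patch is being rejected.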
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.873939 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.884662 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.889317 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.889341 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.889348 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc 
kubenswrapper[4988]: I1123 06:46:17.889361 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.889370 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.897170 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.910278 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.926612 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.941840 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.953513 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z"
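[Editor's note] The payload quoted in each err="failed to patch status ..." is a JSON patch of the pod's .status (a strategic merge patch -- note the $setElementOrder directives), escaped twice by the logger: once as the patch string and once more inside the err value. A short sketch that peels both layers so the diff can be read directly; the file name record.txt and the regexes are the editor's illustrative assumptions (one complete journal record saved to a file), not anything produced by the kubelet:

    # Hypothetical helper -- recover the JSON status patch from a single
    # 'Failed to update status for pod' journal record (ASCII payloads only).
    import json, re

    record = open("record.txt", encoding="utf-8").read()

    # 1) the quoted err="..." value; quotes inside it are backslash-escaped
    err = re.search(r'err="((?:[^"\\]|\\.)*)"', record).group(1)
    err = err.encode().decode("unicode_escape")            # \" -> " ,  \\ -> \

    # 2) the patch string quoted inside that value, escaped one more time
    patch_str = re.search(r'failed to patch status "((?:[^"\\]|\\.)*)"', err).group(1)
    patch = json.loads(patch_str.encode().decode("unicode_escape"))

    print(json.dumps(patch["status"].get("conditions", []), indent=2))

Applied to the record above, this prints the pod conditions the kubelet was trying (and failing) to report for network-metrics-daemon-l5wgs.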
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.986705 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.993014 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.993251 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.993367 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.993822 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:17 crc kubenswrapper[4988]: I1123 06:46:17.993934 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:17Z","lastTransitionTime":"2025-11-23T06:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.002215 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.096638 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.096681 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.096691 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.096706 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.096717 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:18Z","lastTransitionTime":"2025-11-23T06:46:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.200111 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.200160 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.200171 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.200210 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.200224 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:18Z","lastTransitionTime":"2025-11-23T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.250614 4988 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.302263 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.302294 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.302303 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.302316 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.302324 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:18Z","lastTransitionTime":"2025-11-23T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.405643 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.405687 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.405707 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.405731 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.405746 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:18Z","lastTransitionTime":"2025-11-23T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.496255 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:18 crc kubenswrapper[4988]: E1123 06:46:18.496399 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.496580 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.496607 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:18 crc kubenswrapper[4988]: E1123 06:46:18.496838 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:18 crc kubenswrapper[4988]: E1123 06:46:18.496906 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.508463 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.508692 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.508843 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.508917 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.509619 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:18Z","lastTransitionTime":"2025-11-23T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.510735 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.536000 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b20782fa654e6b1bf133eb56470e90cb3bbed92
798fcb372a707bda5ecbf2fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.556376 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.575792 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.585422 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.605150 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.606980 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.607898 4988 scope.go:117] "RemoveContainer" containerID="8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e" Nov 23 06:46:18 crc kubenswrapper[4988]: E1123 06:46:18.608137 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.611811 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.611842 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.611854 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.611872 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.611885 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:18Z","lastTransitionTime":"2025-11-23T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.621709 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.634302 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.649890 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.666922 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.683580 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.698369 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.713942 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.714184 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.714324 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.714459 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.714562 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:18Z","lastTransitionTime":"2025-11-23T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.718597 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.730424 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.741621 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.752603 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.774090 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/0.log" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.781025 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd" exitCode=1 Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.781171 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.785076 4988 scope.go:117] "RemoveContainer" containerID="9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.789389 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" event={"ID":"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3","Type":"ContainerStarted","Data":"703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.797565 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.817660 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.817698 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.817709 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.817728 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.817740 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:18Z","lastTransitionTime":"2025-11-23T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.820015 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.836670 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.850247 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.868365 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.885822 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.901240 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.915833 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.921649 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.921835 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.922042 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.922137 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.922235 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:18Z","lastTransitionTime":"2025-11-23T06:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.930443 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.946398 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:18 crc kubenswrapper[4988]: I1123 06:46:18.975084 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"message\\\":\\\"6:18.380329 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1123 06:46:18.380355 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 06:46:18.382632 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1123 06:46:18.382636 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:46:18.382699 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:46:18.382758 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 06:46:18.382793 6223 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:46:18.382851 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 06:46:18.382889 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:46:18.382927 6223 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:46:18.382936 6223 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:46:18.382959 6223 factory.go:656] Stopping watch factory\\\\nI1123 06:46:18.382982 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1123 06:46:18.382987 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:46:18.383004 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:46:18.383044 6223 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.001852 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.026166 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.026209 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.026217 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.026314 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.026325 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.032912 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.067830 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.111021 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.129303 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.129332 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.129341 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.129354 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.129365 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.149900 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.201796 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.231392 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.231437 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.231449 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.231466 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.231478 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.234152 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.285226 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.313141 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.333742 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.333782 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.333795 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.333816 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.333830 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.351932 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.390982 4988 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.429472 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.436327 4988 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.436367 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.436376 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.436390 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.436399 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.476725 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b20782fa654e6b1bf133eb56470e90cb3bbed92
798fcb372a707bda5ecbf2fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"message\\\":\\\"6:18.380329 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1123 06:46:18.380355 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 06:46:18.382632 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1123 06:46:18.382636 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:46:18.382699 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:46:18.382758 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 06:46:18.382793 6223 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:46:18.382851 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 06:46:18.382889 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:46:18.382927 6223 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:46:18.382936 6223 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:46:18.382959 6223 factory.go:656] Stopping watch factory\\\\nI1123 06:46:18.382982 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1123 06:46:18.382987 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:46:18.383004 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:46:18.383044 6223 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.495955 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:19 crc kubenswrapper[4988]: E1123 06:46:19.496101 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.511370 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.539567 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.539614 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.539625 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.539638 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc 
kubenswrapper[4988]: I1123 06:46:19.539650 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.559088 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47
c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.594309 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.641453 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.642651 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.642681 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.642691 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.642705 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.642716 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.673499 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.711983 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.745295 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.745361 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.745381 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.745409 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.745431 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.752859 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.794989 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/1.log" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.796186 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/0.log" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.796719 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.801721 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9" exitCode=1 Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.801790 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.801851 4988 scope.go:117] "RemoveContainer" containerID="9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.802702 4988 scope.go:117] "RemoveContainer" containerID="fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9" Nov 23 06:46:19 crc kubenswrapper[4988]: E1123 06:46:19.802909 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.832942 
4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.847118 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.847163 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.847173 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.847222 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.847236 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.874581 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006e
c3364d86f22a09df023657a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b20782fa654e6b1bf133eb56470e90cb3bbed92798fcb372a707bda5ecbf2fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"message\\\":\\\"6:18.380329 6223 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1123 06:46:18.380355 6223 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1123 06:46:18.382632 6223 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1123 06:46:18.382636 6223 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:46:18.382699 6223 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:46:18.382758 6223 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 06:46:18.382793 6223 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:46:18.382851 6223 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 06:46:18.382889 6223 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:46:18.382927 6223 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:46:18.382936 6223 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:46:18.382959 6223 factory.go:656] Stopping watch factory\\\\nI1123 06:46:18.382982 6223 ovnkube.go:599] Stopped ovnkube\\\\nI1123 06:46:18.382987 6223 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:46:18.383004 6223 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:46:18.383044 6223 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:19Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:46:19.675588 6413 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"18746a4d-8a63-458a-b7e3-8fb89ff95fc0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.913040 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.950333 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.950386 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:19 crc 
kubenswrapper[4988]: I1123 06:46:19.950396 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.950412 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.950421 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:19Z","lastTransitionTime":"2025-11-23T06:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.951477 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:19 crc kubenswrapper[4988]: I1123 06:46:19.993050 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.035948 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.053609 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.053670 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.053687 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.053711 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.053729 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.076008 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.149349 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.156481 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.156536 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.156551 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.156573 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.156588 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.176370 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.195442 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.234565 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.261300 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.261331 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.261341 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.261358 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.261370 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.274175 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.327329 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.359186 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.363441 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.363506 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.363524 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.363552 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.363571 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.396150 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.433836 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.477597 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.477658 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.477681 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.477719 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.477737 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.495330 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:20 crc kubenswrapper[4988]: E1123 06:46:20.495482 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.495827 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:20 crc kubenswrapper[4988]: E1123 06:46:20.496004 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.495823 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:20 crc kubenswrapper[4988]: E1123 06:46:20.496129 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.580957 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.581020 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.581036 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.581060 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.581076 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.684474 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.684528 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.684546 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.684569 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.684585 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.787473 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.787551 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.787573 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.787599 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.787622 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.809276 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/1.log" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.815231 4988 scope.go:117] "RemoveContainer" containerID="fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9" Nov 23 06:46:20 crc kubenswrapper[4988]: E1123 06:46:20.815482 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.835853 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.857364 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.877901 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.890054 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.890130 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.890154 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.890185 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.890240 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.899229 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.932121 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.952724 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.971394 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.987286 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:20Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.993534 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.993599 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.993622 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.993653 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:20 crc kubenswrapper[4988]: I1123 06:46:20.993680 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:20Z","lastTransitionTime":"2025-11-23T06:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.007512 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.025918 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.057799 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:19Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:46:19.675588 6413 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"18746a4d-8a63-458a-b7e3-8fb89ff95fc0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.081405 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.097305 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.097676 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.097817 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.097954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.098080 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:21Z","lastTransitionTime":"2025-11-23T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.104317 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.120789 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.141464 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.155933 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:21Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.201688 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.202073 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.202309 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:21 crc 
kubenswrapper[4988]: I1123 06:46:21.202474 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.202838 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:21Z","lastTransitionTime":"2025-11-23T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.306388 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.306772 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.306954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.307104 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.307270 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:21Z","lastTransitionTime":"2025-11-23T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.410849 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.410913 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.410934 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.410965 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.410986 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:21Z","lastTransitionTime":"2025-11-23T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.495463 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:21 crc kubenswrapper[4988]: E1123 06:46:21.495705 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.513944 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.514264 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.514426 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.514599 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.514758 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:21Z","lastTransitionTime":"2025-11-23T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.618372 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.618443 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.618460 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.618487 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.618504 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:21Z","lastTransitionTime":"2025-11-23T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.722265 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.722351 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.722373 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.722405 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.722424 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:21Z","lastTransitionTime":"2025-11-23T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.825496 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.825816 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.825912 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.826030 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.826178 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:21Z","lastTransitionTime":"2025-11-23T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.929067 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.929617 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.929718 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.930031 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:21 crc kubenswrapper[4988]: I1123 06:46:21.930159 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:21Z","lastTransitionTime":"2025-11-23T06:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.033161 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.033252 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.033270 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.033293 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.033309 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.136438 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.136815 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.136958 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.137099 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.137293 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.240862 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.240938 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.240960 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.241649 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.241699 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.345673 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.345727 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.345742 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.345761 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.345776 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.448781 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.448851 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.448869 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.448895 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.448914 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.468837 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z"] Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.469417 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.472040 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.472905 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.488834 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.495980 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.496050 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:22 crc kubenswrapper[4988]: E1123 06:46:22.496218 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.496251 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:22 crc kubenswrapper[4988]: E1123 06:46:22.496490 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:22 crc kubenswrapper[4988]: E1123 06:46:22.496722 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.506514 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.531122 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.548961 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.552305 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.552377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.552403 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.552433 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.552456 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.564429 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/705d6107-c270-4f2f-9cc6-d5994972096d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.564497 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/705d6107-c270-4f2f-9cc6-d5994972096d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.564542 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hvrk\" (UniqueName: \"kubernetes.io/projected/705d6107-c270-4f2f-9cc6-d5994972096d-kube-api-access-7hvrk\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.564611 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/705d6107-c270-4f2f-9cc6-d5994972096d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.575107 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.596044 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.613639 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.630221 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.651841 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.655585 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.655666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.655691 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.655728 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.655760 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.665597 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/705d6107-c270-4f2f-9cc6-d5994972096d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.665663 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/705d6107-c270-4f2f-9cc6-d5994972096d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.665724 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hvrk\" (UniqueName: \"kubernetes.io/projected/705d6107-c270-4f2f-9cc6-d5994972096d-kube-api-access-7hvrk\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.665875 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/705d6107-c270-4f2f-9cc6-d5994972096d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.667029 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/705d6107-c270-4f2f-9cc6-d5994972096d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.667108 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/705d6107-c270-4f2f-9cc6-d5994972096d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.670453 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.675293 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/705d6107-c270-4f2f-9cc6-d5994972096d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.686764 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.698844 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hvrk\" (UniqueName: \"kubernetes.io/projected/705d6107-c270-4f2f-9cc6-d5994972096d-kube-api-access-7hvrk\") pod \"ovnkube-control-plane-749d76644c-mwg2z\" (UID: \"705d6107-c270-4f2f-9cc6-d5994972096d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.715997 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.736807 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.758609 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.758696 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.758716 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.758748 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.758769 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.760380 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.782706 4988 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.790976 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.801917 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.819739 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" event={"ID":"705d6107-c270-4f2f-9cc6-d5994972096d","Type":"ContainerStarted","Data":"128460cbbf5eb70c93dacd98b99c94c896c9e29e8b94333dba90108ecfc037e2"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.829636 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006e
c3364d86f22a09df023657a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:19Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:46:19.675588 6413 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"18746a4d-8a63-458a-b7e3-8fb89ff95fc0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:22Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.864918 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.864957 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.864969 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.864987 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.865000 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.967821 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.967865 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.967878 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.967895 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:22 crc kubenswrapper[4988]: I1123 06:46:22.967908 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:22Z","lastTransitionTime":"2025-11-23T06:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.071171 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.071266 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.071285 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.071309 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.071328 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:23Z","lastTransitionTime":"2025-11-23T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.174787 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.174855 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.174881 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.174913 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.174934 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:23Z","lastTransitionTime":"2025-11-23T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.277895 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.277946 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.277963 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.277990 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.278008 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:23Z","lastTransitionTime":"2025-11-23T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.381180 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.381617 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.381770 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.381953 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.382146 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:23Z","lastTransitionTime":"2025-11-23T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.485663 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.485704 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.485716 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.485731 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.485742 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:23Z","lastTransitionTime":"2025-11-23T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.495057 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:23 crc kubenswrapper[4988]: E1123 06:46:23.495262 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.588130 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.588183 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.588247 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.588278 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.588301 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:23Z","lastTransitionTime":"2025-11-23T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.691107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.691572 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.691721 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.691858 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.691975 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:23Z","lastTransitionTime":"2025-11-23T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.796155 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.796242 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.796258 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.796283 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.796298 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:23Z","lastTransitionTime":"2025-11-23T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.827309 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" event={"ID":"705d6107-c270-4f2f-9cc6-d5994972096d","Type":"ContainerStarted","Data":"dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.827373 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" event={"ID":"705d6107-c270-4f2f-9cc6-d5994972096d","Type":"ContainerStarted","Data":"cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.846943 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.865794 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.882415 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.896372 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.898573 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.898924 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.899091 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.899511 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.899654 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:23Z","lastTransitionTime":"2025-11-23T06:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.913443 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.928953 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.940919 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.967071 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:23 crc kubenswrapper[4988]: I1123 06:46:23.997849 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006e
c3364d86f22a09df023657a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:19Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:46:19.675588 6413 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"18746a4d-8a63-458a-b7e3-8fb89ff95fc0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.002497 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.002563 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.002578 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.002601 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.002623 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.020776 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.038918 4988 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.055794 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.072653 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.092903 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.105420 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.105481 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.105503 4988 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.105541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.105565 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.106918 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ov
nkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:24Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.124815 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:24Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.140910 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:24Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.209254 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.209807 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.210001 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.210224 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.210379 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.285957 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.286129 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.286095954 +0000 UTC m=+52.594608747 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
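The UnmountVolume failure above is a different class of error: the kubevirt.io.hostpath-provisioner CSI driver has not re-registered with the kubelet since the restart, so there is no CSI client to hand the TearDown to. The registered drivers can be checked from both sides (a sketch; assumes oc access and a shell on the node):
  # Drivers the API server believes are registered on this node.
  oc get csinode crc -o jsonpath='{.spec.drivers[*].name}{"\n"}'
  # Registration sockets the kubelet has actually picked up locally.
  ls /var/lib/kubelet/plugins_registry/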
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.286561 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.286670 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.286769 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.286842 4988 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287002 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287043 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287063 4988 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287015 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.286971425 +0000 UTC m=+52.595484228 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287147 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.287129969 +0000 UTC m=+52.595642762 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.286863 4988 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287226 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.28718375 +0000 UTC m=+52.595696553 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.286874 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287406 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287488 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287556 4988 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.287670 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.287657772 +0000 UTC m=+52.596170755 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.314016 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.314081 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.314095 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.314181 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.314228 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.417249 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.417562 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.417636 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.417702 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.417761 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.495764 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.495893 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.495964 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.496015 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.496152 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.496450 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.520885 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.520943 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.520962 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.520987 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.521043 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.624088 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.624135 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.624148 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.624164 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.624178 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.691729 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.692025 4988 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:24 crc kubenswrapper[4988]: E1123 06:46:24.692169 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs podName:1a94eb06-d03a-43c9-8004-73d48280435f nodeName:}" failed. No retries permitted until 2025-11-23 06:46:40.692135442 +0000 UTC m=+53.000648395 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs") pod "network-metrics-daemon-l5wgs" (UID: "1a94eb06-d03a-43c9-8004-73d48280435f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.727522 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.727571 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.727590 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.727612 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.727630 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.830265 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.830339 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.830363 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.830393 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.830416 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.933680 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.933752 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.933770 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.933797 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:24 crc kubenswrapper[4988]: I1123 06:46:24.933814 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:24Z","lastTransitionTime":"2025-11-23T06:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.037807 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.037881 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.037899 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.037924 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.037943 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.072514 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.072618 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.072641 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.072666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.072685 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:25 crc kubenswrapper[4988]: E1123 06:46:25.093969 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:25Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.099273 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.099337 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.099353 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.099757 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.099813 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:25 crc kubenswrapper[4988]: E1123 06:46:25.157854 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:25Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.162437 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.162519 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.162543 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.162574 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.162600 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:25 crc kubenswrapper[4988]: E1123 06:46:25.181180 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.192898 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.193008 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.193027 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.193051 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.193069 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:25 crc kubenswrapper[4988]: E1123 06:46:25.213247 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.218439 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.218509 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.218528 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.218578 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.218619 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:25 crc kubenswrapper[4988]: E1123 06:46:25.232910 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:25Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:25 crc kubenswrapper[4988]: E1123 06:46:25.233185 4988 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.235105 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.235140 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.235156 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.235174 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.235189 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.337893 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.337950 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.337966 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.337991 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.338009 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.441585 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.441639 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.441656 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.441679 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.441695 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.495721 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
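
Every "Error updating node status, will retry" entry above fails for the same reason: the API server must consult the validating webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743/node before admitting the kubelet's status patch, and that webhook serves a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-23. Once the retries are spent, the kubelet logs "update node status exceeds retry count" and the cycle restarts on the next sync. The check that fails is plain x509 validity; below is a minimal sketch of it against Go's standard library, not the kubelet's or the webhook's actual code, and the certificate path is hypothetical:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute the webhook's actual serving certificate.
	raw, err := os.ReadFile("/tmp/network-node-identity-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in input")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The same comparison the TLS handshake performs; with the dates in the
	// log, now (2025-11-23) is after NotAfter (2025-08-24), so it fails.
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid before %s\n",
			cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Until that certificate is renewed, every retry will produce the same x509 error, which is why the identical image-list-heavy status payload keeps reappearing in the journal.
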
Nov 23 06:46:25 crc kubenswrapper[4988]: E1123 06:46:25.495890 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.545420 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.545466 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.545484 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.545508 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.545524 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.648068 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.648133 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.648151 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.648175 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.648242 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
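
Separately from the webhook problem, the node reports NotReady because no CNI network configuration exists yet: the Ready condition's message names an empty /etc/kubernetes/cni/net.d/, and the adjacent "No sandbox for pod can be found" / "Error syncing pod, skipping" pair shows sandbox creation for network-check-source being deferred for the same reason. To a first approximation the readiness test reduces to scanning that directory for config files; a sketch under that assumption (the real kubelet/CRI-O logic also parses and validates the files):

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// The directory named in the log's NetworkReady message.
	dir := "/etc/kubernetes/cni/net.d"
	var confs []string
	// CNI accepts .conf, .conflist and .json configuration files.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			continue // Glob only errors on a malformed pattern
		}
		confs = append(confs, matches...)
	}
	if len(confs) == 0 {
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", dir)
		return
	}
	fmt.Println("NetworkReady=true, configs:", confs)
}

On CRC the config normally appears once the OVN-Kubernetes pods start and write it into that directory; those pods in turn depend on a working control plane, so the expired webhook certificate above can leave the node stuck in this loop.
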
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.750951 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.750991 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.751001 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.751020 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.751035 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.853745 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.853808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.853830 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.853859 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.853882 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.956506 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.956552 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.956568 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.956594 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:25 crc kubenswrapper[4988]: I1123 06:46:25.956610 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
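
The condition object that setters.go prints in each of these entries is plain JSON in the v1.NodeCondition field layout. Decoding the exact string from the log makes that layout explicit; the sketch below is illustrative only and models the timestamps as strings rather than the metav1.Time type the real API object uses:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// nodeCondition mirrors the fields of the condition printed by setters.go.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Copied verbatim from a "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:25Z","lastTransitionTime":"2025-11-23T06:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s reason=%s transitioned=%s\n", c.Type, c.Status, c.Reason, c.LastTransitionTime)
	fmt.Println("message:", c.Message)
}

Note that lastTransitionTime advances with every entry: because the webhook rejects each status patch, these conditions live only in the kubelet's memory and are recomputed on every sync rather than persisted to the API server.
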
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.059653 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.059711 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.059732 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.059754 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.059775 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.162768 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.162842 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.162868 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.162897 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.162923 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.266283 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.266354 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.266393 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.266429 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.266452 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.369750 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.369818 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.369831 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.369851 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.369863 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.472642 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.472768 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.472797 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.472826 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.472848 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.495540 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.495579 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.495682 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:26 crc kubenswrapper[4988]: E1123 06:46:26.495936 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:26 crc kubenswrapper[4988]: E1123 06:46:26.496070 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:26 crc kubenswrapper[4988]: E1123 06:46:26.496124 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.575501 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.575550 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.575568 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.575631 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.575649 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.678399 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.678455 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.678475 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.678497 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.678514 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.783498 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.783592 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.783617 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.783649 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.783670 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.886895 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.886951 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.886967 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.886991 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.887008 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.990665 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.990721 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.990737 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.990764 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:26 crc kubenswrapper[4988]: I1123 06:46:26.990781 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:26Z","lastTransitionTime":"2025-11-23T06:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.093330 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.093366 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.093377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.093392 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.093406 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:27Z","lastTransitionTime":"2025-11-23T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.196287 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.196388 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.196407 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.196433 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.196454 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:27Z","lastTransitionTime":"2025-11-23T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.299966 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.300014 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.300031 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.300053 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.300068 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:27Z","lastTransitionTime":"2025-11-23T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.404377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.404442 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.404460 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.404486 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.404505 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:27Z","lastTransitionTime":"2025-11-23T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.496260 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:27 crc kubenswrapper[4988]: E1123 06:46:27.496455 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.508185 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.508244 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.508255 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.508271 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.508285 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:27Z","lastTransitionTime":"2025-11-23T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.611255 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.611307 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.611324 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.611350 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.611368 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:27Z","lastTransitionTime":"2025-11-23T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.713742 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.714064 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.714240 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.714367 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.714473 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:27Z","lastTransitionTime":"2025-11-23T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.817612 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.817685 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.817702 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.817726 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.817742 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:27Z","lastTransitionTime":"2025-11-23T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.921607 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.921694 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.921727 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.921759 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:27 crc kubenswrapper[4988]: I1123 06:46:27.921782 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:27Z","lastTransitionTime":"2025-11-23T06:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.025972 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.026060 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.026085 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.026117 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.026141 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.130054 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.130101 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.130118 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.130144 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.130164 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.233838 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.233953 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.233978 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.234010 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.234032 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.338040 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.338126 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.338146 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.338177 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.338238 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.441968 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.442065 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.442090 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.442118 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.442139 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.495234 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:28 crc kubenswrapper[4988]: E1123 06:46:28.495479 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.495551 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:28 crc kubenswrapper[4988]: E1123 06:46:28.495787 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.495832 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:28 crc kubenswrapper[4988]: E1123 06:46:28.495992 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.515266 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.546559 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.546620 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.546640 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.546667 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.546686 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.552134 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006e
c3364d86f22a09df023657a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:19Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:46:19.675588 6413 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"18746a4d-8a63-458a-b7e3-8fb89ff95fc0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.577338 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.602683 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.623081 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.647330 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.649023 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.649065 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.649085 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.649110 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.649127 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.664715 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.685988 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.712738 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.743937 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.752977 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.753069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.753107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.753152 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.753188 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.766957 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.783279 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.804527 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.840368 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.856037 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.856082 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.856098 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.856121 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.856149 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.862172 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.881177 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.898293 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.959014 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.959060 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.959088 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.959114 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:28 crc kubenswrapper[4988]: I1123 06:46:28.959131 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:28Z","lastTransitionTime":"2025-11-23T06:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[... the five-record node-status cycle above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats at roughly 100 ms intervals from 06:46:29.062 through 06:46:30.822 with only the timestamps advancing; the duplicate cycles are elided here, keeping the unique records interleaved with them and the final "Node became not ready" record ...]
Nov 23 06:46:29 crc kubenswrapper[4988]: I1123 06:46:29.495400 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:46:29 crc kubenswrapper[4988]: E1123 06:46:29.495546 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:46:29 crc kubenswrapper[4988]: I1123 06:46:29.496370 4988 scope.go:117] "RemoveContainer" containerID="8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e"
Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.495891 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.495982 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.495898 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:46:30 crc kubenswrapper[4988]: E1123 06:46:30.496165 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f"
Nov 23 06:46:30 crc kubenswrapper[4988]: E1123 06:46:30.496330 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:46:30 crc kubenswrapper[4988]: E1123 06:46:30.496462 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.822368 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:30Z","lastTransitionTime":"2025-11-23T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.857952 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.860102 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846"} Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.860493 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.885632 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.905593 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.925302 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.925377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.925399 4988 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.925447 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.925464 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:30Z","lastTransitionTime":"2025-11-23T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.926288 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.941891 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.957798 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.981840 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:19Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:46:19.675588 6413 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"18746a4d-8a63-458a-b7e3-8fb89ff95fc0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:30 crc kubenswrapper[4988]: I1123 06:46:30.999562 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.014947 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.028455 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.028493 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.028504 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.028521 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.028534 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.035745 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.051774 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.065037 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.085827 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.109564 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.131026 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.131952 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.132014 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.132042 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.132072 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.132096 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.156279 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.182210 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.216736 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.239040 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.239077 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.239088 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.239106 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.239117 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.341897 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.341958 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.341974 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.341998 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.342015 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.445227 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.445293 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.445320 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.445353 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.445381 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.495986 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:31 crc kubenswrapper[4988]: E1123 06:46:31.496178 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.548330 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.548369 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.548382 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.548401 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.548413 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.651585 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.651643 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.651654 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.651675 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.651691 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.755146 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.755221 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.755239 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.755263 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.755280 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.858756 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.858836 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.858862 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.858890 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.858912 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.962255 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.962316 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.962329 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.962347 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:31 crc kubenswrapper[4988]: I1123 06:46:31.962360 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:31Z","lastTransitionTime":"2025-11-23T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.065676 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.065752 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.065775 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.065808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.065830 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:32Z","lastTransitionTime":"2025-11-23T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.169655 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.169734 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.169754 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.169793 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.169816 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:32Z","lastTransitionTime":"2025-11-23T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.273604 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.273683 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.273701 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.273727 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.273749 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:32Z","lastTransitionTime":"2025-11-23T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.377644 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.377735 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.377763 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.377804 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.377833 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:32Z","lastTransitionTime":"2025-11-23T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.481116 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.481239 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.481260 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.481291 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.481312 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:32Z","lastTransitionTime":"2025-11-23T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.495707 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.495789 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:32 crc kubenswrapper[4988]: E1123 06:46:32.495963 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.495993 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:32 crc kubenswrapper[4988]: E1123 06:46:32.496289 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:32 crc kubenswrapper[4988]: E1123 06:46:32.496600 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.500785 4988 scope.go:117] "RemoveContainer" containerID="fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.584373 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.584420 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.584433 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.584455 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.584472 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:32Z","lastTransitionTime":"2025-11-23T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.688644 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.688705 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.688725 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.688757 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.688782 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:32Z","lastTransitionTime":"2025-11-23T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.792838 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.792881 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.792899 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.792922 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.792939 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:32Z","lastTransitionTime":"2025-11-23T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.879985 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/1.log" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.886375 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.887302 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.897437 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.897495 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.897516 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.897541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.897559 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:32Z","lastTransitionTime":"2025-11-23T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.909849 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.928587 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.957248 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:32 crc kubenswrapper[4988]: I1123 06:46:32.983402 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:32Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.000865 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.000925 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.000941 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.000963 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.000978 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.007892 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.031308 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.064811 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.084623 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.103303 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.103608 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.103711 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.103801 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.103884 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.108499 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.126722 4988 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.138495 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.159297 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:19Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:46:19.675588 6413 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"18746a4d-8a63-458a-b7e3-8fb89ff95fc0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.169701 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.180768 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.198338 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.206534 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.206591 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.206609 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.206635 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.206652 4988 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.212309 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.225020 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.308782 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.308827 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.308839 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.308856 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.308868 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.412049 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.412588 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.412741 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.412886 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.413015 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.495123 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:33 crc kubenswrapper[4988]: E1123 06:46:33.495371 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.516067 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.516125 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.516144 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.516169 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.516188 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.619058 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.619120 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.619137 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.619161 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.619184 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.721720 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.721803 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.721838 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.721870 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.721891 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.825327 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.825409 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.825432 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.825463 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.825490 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.895362 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/2.log" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.896850 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/1.log" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.902251 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.902341 4988 scope.go:117] "RemoveContainer" containerID="fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.902179 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab" exitCode=1 Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.906988 4988 scope.go:117] "RemoveContainer" containerID="ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab" Nov 23 06:46:33 crc kubenswrapper[4988]: E1123 06:46:33.907387 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.926315 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.930463 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.930517 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.930535 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.930562 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.930578 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:33Z","lastTransitionTime":"2025-11-23T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.946420 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:33 crc kubenswrapper[4988]: I1123 06:46:33.979777 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fffd0ade028432d3bc82862352de40311414006ec3364d86f22a09df023657a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:19Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:19Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:46:19.675588 6413 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"18746a4d-8a63-458a-b7e3-8fb89ff95fc0\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.005830 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.029850 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.033605 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.033670 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.033693 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.033722 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.033744 4988 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.044500 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.060560 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.073486 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.088567 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.110131 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.132407 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.137461 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.137539 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.137563 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.137595 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.137617 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.153427 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.174675 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.210112 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.231794 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.241302 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.241387 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.241415 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.241481 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.241509 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.252332 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.269169 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.345166 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.345230 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.345240 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.345255 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.345264 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.448060 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.448111 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.448123 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.448140 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.448152 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.495691 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.495804 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:34 crc kubenswrapper[4988]: E1123 06:46:34.495902 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.495940 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:34 crc kubenswrapper[4988]: E1123 06:46:34.496142 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:34 crc kubenswrapper[4988]: E1123 06:46:34.496401 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.551796 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.551858 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.551899 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.551926 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.551946 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.656573 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.656657 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.656682 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.656735 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.656758 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.760492 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.760562 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.760586 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.760609 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.760626 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.864686 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.864756 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.864779 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.864808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.864831 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.909477 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/2.log" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.915367 4988 scope.go:117] "RemoveContainer" containerID="ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab" Nov 23 06:46:34 crc kubenswrapper[4988]: E1123 06:46:34.915613 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.935041 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.953394 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.968661 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.968726 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.968748 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.968775 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.968798 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:34Z","lastTransitionTime":"2025-11-23T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:34 crc kubenswrapper[4988]: I1123 06:46:34.987591 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:34Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.006318 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.029907 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.051850 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.069120 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.071855 4988 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.071915 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.071939 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.072040 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.072072 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.102839 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8
fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.121024 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.139752 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.161808 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.175386 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.175433 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.175450 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.175476 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.175497 4988 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.180101 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.203076 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.227008 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.246291 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.267686 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.278621 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.278682 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.278706 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.278739 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.278761 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.289509 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.381592 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.381659 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.381681 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.381709 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.381727 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.485257 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.485316 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.485334 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.485362 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.485379 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.494131 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.494219 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.494247 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.494274 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.494294 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.495487 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:35 crc kubenswrapper[4988]: E1123 06:46:35.495656 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:35 crc kubenswrapper[4988]: E1123 06:46:35.517348 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.523089 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.523169 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.523221 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.523257 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.523282 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: E1123 06:46:35.546032 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.556798 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.556870 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.556889 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.556923 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.556942 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: E1123 06:46:35.577236 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.582527 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.582574 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.582588 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.582609 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.582625 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: E1123 06:46:35.599935 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.605148 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.605253 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.605272 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.605302 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.605321 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: E1123 06:46:35.624117 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list and nodeInfo identical to the previous patch attempt, elided ... ]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:35Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:35 crc kubenswrapper[4988]: E1123 06:46:35.624415 4988 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.626692 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.626729 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.626739 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.626755 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.626768 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.729910 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.729967 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.729984 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.730008 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.730025 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.832669 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.832744 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.832767 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.832794 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.832815 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.935921 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.935963 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.935976 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.935992 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:35 crc kubenswrapper[4988]: I1123 06:46:35.936003 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:35Z","lastTransitionTime":"2025-11-23T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.039562 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.039655 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.039680 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.039713 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.039737 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.142579 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.142654 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.142679 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.142711 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.142734 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.246484 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.246560 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.246584 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.246613 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.246635 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.350382 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.350444 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.350465 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.350494 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.350516 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.453377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.453436 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.453452 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.453476 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.453493 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.495484 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.495575 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.495675 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:36 crc kubenswrapper[4988]: E1123 06:46:36.495677 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:36 crc kubenswrapper[4988]: E1123 06:46:36.495900 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:36 crc kubenswrapper[4988]: E1123 06:46:36.496054 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.557287 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.557445 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.557474 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.557547 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.557569 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.660525 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.660588 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.660611 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.660638 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.660688 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.764175 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.764283 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.764302 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.764326 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.764344 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.867863 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.867930 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.867954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.867991 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.868010 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.971318 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.971428 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.971447 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.971472 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:36 crc kubenswrapper[4988]: I1123 06:46:36.971488 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:36Z","lastTransitionTime":"2025-11-23T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.074534 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.074589 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.074609 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.074632 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.074649 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:37Z","lastTransitionTime":"2025-11-23T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.178149 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.178240 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.178260 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.178284 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.178302 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:37Z","lastTransitionTime":"2025-11-23T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.281877 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.283492 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.283542 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.283570 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.283589 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:37Z","lastTransitionTime":"2025-11-23T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.386685 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.386734 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.386743 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.386765 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.386774 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:37Z","lastTransitionTime":"2025-11-23T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.490578 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.490642 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.490660 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.490690 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.490713 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:37Z","lastTransitionTime":"2025-11-23T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.495289 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:37 crc kubenswrapper[4988]: E1123 06:46:37.495443 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.600905 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.601016 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.601049 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.601081 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.601107 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:37Z","lastTransitionTime":"2025-11-23T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.706258 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.706381 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.706404 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.706434 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.706455 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:37Z","lastTransitionTime":"2025-11-23T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.810128 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.810189 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.810243 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.810284 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.810319 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:37Z","lastTransitionTime":"2025-11-23T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.913976 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.914050 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.914069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.914093 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:37 crc kubenswrapper[4988]: I1123 06:46:37.914113 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:37Z","lastTransitionTime":"2025-11-23T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.017441 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.017499 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.017517 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.017541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.017559 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.120726 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.120806 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.120830 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.120855 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.120873 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.223643 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.223698 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.223715 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.223740 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.223757 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.327153 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.327225 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.327241 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.327264 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.327282 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.430243 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.430312 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.430331 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.430357 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.430378 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.495384 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.495398 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.495551 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:38 crc kubenswrapper[4988]: E1123 06:46:38.495739 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:38 crc kubenswrapper[4988]: E1123 06:46:38.495900 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:38 crc kubenswrapper[4988]: E1123 06:46:38.496092 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.532439 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5
a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.533957 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.533992 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.534006 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.534027 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.534041 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.557677 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.579001 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.592715 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.604410 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.624388 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.637013 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.637100 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.637116 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.637137 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.637151 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.639147 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.653084 4988 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.667135 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.684118 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.699133 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.717532 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.740436 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.740680 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.740744 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.740767 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.740803 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.740826 4988 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.760693 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.783724 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.805107 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.820975 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.843527 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.843580 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.843597 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.843621 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.843638 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.947879 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.947937 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.947954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.947977 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:38 crc kubenswrapper[4988]: I1123 06:46:38.947993 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:38Z","lastTransitionTime":"2025-11-23T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.050683 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.050723 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.050737 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.050753 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.050765 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.154654 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.154718 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.154739 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.154765 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.154783 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.260867 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.260941 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.260964 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.260995 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.261019 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.363991 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.364073 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.364086 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.364130 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.364148 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.468061 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.468125 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.468138 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.468161 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.468178 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.495905 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:39 crc kubenswrapper[4988]: E1123 06:46:39.496434 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.571537 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.571598 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.571611 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.571638 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.571657 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.675286 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.675369 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.675392 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.675421 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.675438 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.778934 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.779005 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.779026 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.779057 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.779076 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.883000 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.883064 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.883082 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.883111 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.883128 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.985922 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.985982 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.986007 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.986034 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:39 crc kubenswrapper[4988]: I1123 06:46:39.986052 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:39Z","lastTransitionTime":"2025-11-23T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.089435 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.089507 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.089541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.089569 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.089593 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:40Z","lastTransitionTime":"2025-11-23T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.192487 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.192532 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.192544 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.192562 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.192574 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:40Z","lastTransitionTime":"2025-11-23T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.295977 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.296028 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.296045 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.296068 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.296088 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:40Z","lastTransitionTime":"2025-11-23T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.370708 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.370893 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:12.370865428 +0000 UTC m=+84.679378201 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.370952 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.371006 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.371051 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.371078 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371167 4988 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371254 4988 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 
06:46:40.371304 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371328 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371370 4988 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371311 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:47:12.371275378 +0000 UTC m=+84.679788181 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371470 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:47:12.371444952 +0000 UTC m=+84.679957745 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371504 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:47:12.371492803 +0000 UTC m=+84.680005606 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371563 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371622 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371643 4988 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.371744 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:47:12.371717079 +0000 UTC m=+84.680229882 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.399344 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.399401 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.399419 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.399443 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.399464 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:40Z","lastTransitionTime":"2025-11-23T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.495508 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.495559 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.495751 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.495894 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.496012 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.496148 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.503270 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.503324 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.503346 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.503375 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.503396 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:40Z","lastTransitionTime":"2025-11-23T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.607159 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.607236 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.607259 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.607283 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.607299 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:40Z","lastTransitionTime":"2025-11-23T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.710786 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.710834 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.710852 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.710876 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.710893 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:40Z","lastTransitionTime":"2025-11-23T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.776789 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.776997 4988 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 06:46:40 crc kubenswrapper[4988]: E1123 06:46:40.777087 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs podName:1a94eb06-d03a-43c9-8004-73d48280435f nodeName:}" failed. No retries permitted until 2025-11-23 06:47:12.77706283 +0000 UTC m=+85.085575623 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs") pod "network-metrics-daemon-l5wgs" (UID: "1a94eb06-d03a-43c9-8004-73d48280435f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.813890 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.813947 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.813963 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.813986 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.814004 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:40Z","lastTransitionTime":"2025-11-23T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.917711 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.917770 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.917789 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.917815 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:40 crc kubenswrapper[4988]: I1123 06:46:40.917835 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:40Z","lastTransitionTime":"2025-11-23T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.020966 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.021083 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.021113 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.021174 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.021220 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.124498 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.124585 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.124609 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.124640 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.124663 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.228317 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.228442 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.228463 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.228489 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.228506 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.332473 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.332535 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.332551 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.332579 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.332597 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.435796 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.435889 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.435907 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.435964 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.435982 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.495896 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:46:41 crc kubenswrapper[4988]: E1123 06:46:41.496092 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.539237 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.539320 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.539345 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.539377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.539400 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.642743 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.642817 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.642837 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.642862 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.642879 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.745230 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.745314 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.745352 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.745385 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.745408 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.848831 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.848881 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.848899 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.848923 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.848943 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.856808 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.872080 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.876355 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:41Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.910215 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:41Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.928234 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:41Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.949747 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:41Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.951362 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.951504 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.951595 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.951627 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.951646 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:41Z","lastTransitionTime":"2025-11-23T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.965786 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:41Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:41 crc kubenswrapper[4988]: I1123 06:46:41.982558 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:41Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.013128 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z"
Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.036309 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.055789 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.055852 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.055875 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.055905 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.055928 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.056436 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.078342 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.093724 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.115927 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.129867 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.145891 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.160031 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.160086 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.160107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.160134 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.160156 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.164550 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.186219 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.203474 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.263243 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.263301 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.263318 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.263343 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.263361 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.366887 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.367396 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.367577 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.367718 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.367849 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.471534 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.471597 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.471617 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.471640 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.471657 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.495834 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:42 crc kubenswrapper[4988]: E1123 06:46:42.495989 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.496138 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:42 crc kubenswrapper[4988]: E1123 06:46:42.496375 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.496384 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:42 crc kubenswrapper[4988]: E1123 06:46:42.496531 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.574584 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.574683 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.574706 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.574735 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.574757 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.678406 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.678472 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.678497 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.678520 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.678536 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.781753 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.781784 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.781795 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.781812 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.781823 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.885040 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.885107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.885123 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.885567 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.885603 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.989834 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.989921 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.989944 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.989972 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:42 crc kubenswrapper[4988]: I1123 06:46:42.989996 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:42Z","lastTransitionTime":"2025-11-23T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.092064 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.092125 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.092142 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.092169 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.092186 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:43Z","lastTransitionTime":"2025-11-23T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.175915 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.195908 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.195963 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.195985 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.196017 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.196040 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:43Z","lastTransitionTime":"2025-11-23T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.202745 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.222847 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.257765 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.281906 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.299143 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.299236 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.299255 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.299280 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.299298 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:43Z","lastTransitionTime":"2025-11-23T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.305136 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.322468 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.342575 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.358475 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.375976 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.396167 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.406715 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.406796 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.406823 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.406910 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.406938 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:43Z","lastTransitionTime":"2025-11-23T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.416725 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.437345 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.455866 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.473034 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.495580 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:43 crc kubenswrapper[4988]: E1123 06:46:43.495905 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.504089 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a
83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.509912 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.510236 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.510388 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.510531 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.510669 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:43Z","lastTransitionTime":"2025-11-23T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.520156 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.538841 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.555253 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.613554 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.613854 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.613999 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.614143 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.614391 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:43Z","lastTransitionTime":"2025-11-23T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.719235 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.719369 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.719391 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.719418 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.719437 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:43Z","lastTransitionTime":"2025-11-23T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.822342 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.822688 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.822866 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.823047 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.823229 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:43Z","lastTransitionTime":"2025-11-23T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.926853 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.926919 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.926938 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.926965 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:43 crc kubenswrapper[4988]: I1123 06:46:43.926984 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:43Z","lastTransitionTime":"2025-11-23T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.030036 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.030102 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.030134 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.030159 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.030178 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.133592 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.134335 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.134360 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.134376 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.134388 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.237791 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.237851 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.237868 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.237929 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.237947 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.341620 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.342102 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.342121 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.342145 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.342162 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.445268 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.445333 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.445350 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.445379 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.445397 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.496449 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.496481 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:44 crc kubenswrapper[4988]: E1123 06:46:44.496637 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.496699 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:44 crc kubenswrapper[4988]: E1123 06:46:44.497024 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:44 crc kubenswrapper[4988]: E1123 06:46:44.496908 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.549101 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.549146 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.549162 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.549186 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.549230 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.652725 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.652779 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.652790 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.652809 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.652820 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.755986 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.756051 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.756067 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.756094 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.756112 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.859838 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.859916 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.859939 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.859966 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.859985 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.962347 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.962699 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.962767 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.962841 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:44 crc kubenswrapper[4988]: I1123 06:46:44.962910 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:44Z","lastTransitionTime":"2025-11-23T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.066794 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.067151 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.067369 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.067569 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.067720 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.171272 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.172286 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.172438 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.172604 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.172748 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.282428 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.282535 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.282556 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.282587 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.282609 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.386104 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.386172 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.386208 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.386231 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.386246 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.489724 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.490086 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.490260 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.490416 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.490537 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.495391 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:45 crc kubenswrapper[4988]: E1123 06:46:45.495674 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.593958 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.594029 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.594052 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.594081 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.594103 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.682677 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.682738 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.682755 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.682779 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.682797 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: E1123 06:46:45.704759 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.710932 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.710984 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.711001 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.711026 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.711044 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: E1123 06:46:45.731876 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.737886 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.737945 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.737965 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.737989 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.738007 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: E1123 06:46:45.761372 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.767067 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.767178 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.767219 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.767249 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.767269 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: E1123 06:46:45.789969 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.794833 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.794927 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.794951 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.794977 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.794995 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: E1123 06:46:45.813376 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:45 crc kubenswrapper[4988]: E1123 06:46:45.813604 4988 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.816170 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.816262 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.816282 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.816308 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.816326 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.919376 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.919419 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.919435 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.919458 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:45 crc kubenswrapper[4988]: I1123 06:46:45.919475 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:45Z","lastTransitionTime":"2025-11-23T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.022083 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.022174 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.022248 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.022277 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.022341 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.124519 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.124581 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.124604 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.124629 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.124649 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.227121 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.227179 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.227248 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.227291 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.227369 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.330696 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.330768 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.330787 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.330815 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.330836 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.433678 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.433734 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.433750 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.433772 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.433792 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.496261 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.496350 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:46 crc kubenswrapper[4988]: E1123 06:46:46.496450 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.496480 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:46 crc kubenswrapper[4988]: E1123 06:46:46.496618 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:46 crc kubenswrapper[4988]: E1123 06:46:46.496791 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.538497 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.538554 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.538570 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.538591 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.538607 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.642561 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.642635 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.642655 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.642682 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.642708 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.746410 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.746481 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.746499 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.746522 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.746539 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.849399 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.849455 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.849471 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.849494 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.849510 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.952517 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.952577 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.952593 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.952616 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:46 crc kubenswrapper[4988]: I1123 06:46:46.952632 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:46Z","lastTransitionTime":"2025-11-23T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.055098 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.055225 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.055253 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.055288 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.055313 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.157927 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.158016 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.158037 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.158063 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.158082 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.260781 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.260861 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.260884 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.260912 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.260934 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.363805 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.363873 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.363895 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.363925 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.363951 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.467770 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.467834 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.467850 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.467873 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.467891 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.496104 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:47 crc kubenswrapper[4988]: E1123 06:46:47.496929 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.497504 4988 scope.go:117] "RemoveContainer" containerID="ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab" Nov 23 06:46:47 crc kubenswrapper[4988]: E1123 06:46:47.497815 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.571257 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.571325 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.571342 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.571366 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.571383 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.674048 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.674106 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.674123 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.674145 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.674163 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.777293 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.777354 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.777371 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.777395 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.777412 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.880758 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.881253 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.881415 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.881554 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.881702 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.985279 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.985365 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.985390 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.985477 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:47 crc kubenswrapper[4988]: I1123 06:46:47.985502 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:47Z","lastTransitionTime":"2025-11-23T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.089463 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.089570 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.089593 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.089661 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.089680 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:48Z","lastTransitionTime":"2025-11-23T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.194274 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.194344 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.194367 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.194398 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.194419 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:48Z","lastTransitionTime":"2025-11-23T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.297838 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.297902 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.297920 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.297949 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.297973 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:48Z","lastTransitionTime":"2025-11-23T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.400905 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.400980 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.401000 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.401028 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.401051 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:48Z","lastTransitionTime":"2025-11-23T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.495510 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.495617 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:48 crc kubenswrapper[4988]: E1123 06:46:48.495694 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.495716 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:48 crc kubenswrapper[4988]: E1123 06:46:48.495835 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:48 crc kubenswrapper[4988]: E1123 06:46:48.495951 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.504338 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.504395 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.504415 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.504443 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.504464 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:48Z","lastTransitionTime":"2025-11-23T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.519472 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.540884 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.559083 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.591175 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.610235 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.610293 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.610314 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.610342 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.610365 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:48Z","lastTransitionTime":"2025-11-23T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.628501 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.655627 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.675793 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.695415 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.713853 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.713917 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.713935 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.713960 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.713978 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:48Z","lastTransitionTime":"2025-11-23T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.716921 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.733469 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.753038 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.778253 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.797254 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.818244 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.818285 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.818325 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.818347 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.818359 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:48Z","lastTransitionTime":"2025-11-23T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.821076 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.836347 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.858152 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.875614 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.893740 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.922032 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.922079 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.922092 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.922114 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:48 crc kubenswrapper[4988]: I1123 06:46:48.922128 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:48Z","lastTransitionTime":"2025-11-23T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.025066 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.025121 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.025138 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.025164 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.025181 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.129072 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.129147 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.129198 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.129253 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.129274 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.232501 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.232546 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.232557 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.232574 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.232586 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.335819 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.335864 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.335875 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.335893 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.335909 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.438817 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.438868 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.438881 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.438903 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.438915 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.495930 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:49 crc kubenswrapper[4988]: E1123 06:46:49.496063 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.541664 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.541713 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.541725 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.541744 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.541756 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.644842 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.644986 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.645066 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.645086 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.645178 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.748241 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.748294 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.748305 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.748320 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.748329 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.851640 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.851713 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.851732 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.851758 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.851776 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.954813 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.954878 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.954894 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.954919 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:49 crc kubenswrapper[4988]: I1123 06:46:49.954937 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:49Z","lastTransitionTime":"2025-11-23T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.058076 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.058121 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.058137 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.058165 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.058181 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.161078 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.161140 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.161159 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.161183 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.161232 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.264441 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.264513 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.264534 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.264561 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.264577 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.368148 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.368205 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.368256 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.368279 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.368298 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.472487 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.472562 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.472587 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.472617 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.472640 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.495469 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.495504 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.495596 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:50 crc kubenswrapper[4988]: E1123 06:46:50.495653 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:50 crc kubenswrapper[4988]: E1123 06:46:50.495802 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:50 crc kubenswrapper[4988]: E1123 06:46:50.495937 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.575835 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.575903 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.575921 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.575947 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.575966 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.678585 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.678649 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.678666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.678690 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.678708 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.782304 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.782375 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.782394 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.782418 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.782436 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.886503 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.886577 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.886594 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.886674 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.886704 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.990035 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.990109 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.990128 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.990151 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:50 crc kubenswrapper[4988]: I1123 06:46:50.990170 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:50Z","lastTransitionTime":"2025-11-23T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.094298 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.094385 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.094408 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.094446 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.094470 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:51Z","lastTransitionTime":"2025-11-23T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.197740 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.197803 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.197819 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.197844 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.197860 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:51Z","lastTransitionTime":"2025-11-23T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.301877 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.301960 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.301980 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.302003 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.302021 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:51Z","lastTransitionTime":"2025-11-23T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.405657 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.405727 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.405746 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.405775 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.405792 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:51Z","lastTransitionTime":"2025-11-23T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.495184 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:51 crc kubenswrapper[4988]: E1123 06:46:51.495481 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.508910 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.508999 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.509025 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.509054 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.509072 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:51Z","lastTransitionTime":"2025-11-23T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.612502 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.612557 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.612574 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.612600 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.612619 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:51Z","lastTransitionTime":"2025-11-23T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.716400 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.716494 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.716515 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.716542 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.716566 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:51Z","lastTransitionTime":"2025-11-23T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.819040 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.819140 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.819157 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.819177 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.819190 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:51Z","lastTransitionTime":"2025-11-23T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.922394 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.922441 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.922459 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.922484 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:51 crc kubenswrapper[4988]: I1123 06:46:51.922499 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:51Z","lastTransitionTime":"2025-11-23T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.025442 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.025495 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.025518 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.025562 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.025582 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.129369 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.129476 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.129499 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.129527 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.129549 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.232792 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.232854 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.232883 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.232912 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.232934 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.337031 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.337072 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.337083 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.337099 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.337112 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.439456 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.439520 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.439541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.439565 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.439585 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.496157 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:52 crc kubenswrapper[4988]: E1123 06:46:52.496386 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.496669 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:52 crc kubenswrapper[4988]: E1123 06:46:52.496764 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.496974 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:52 crc kubenswrapper[4988]: E1123 06:46:52.497071 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.543667 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.544000 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.544018 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.544041 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.544066 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.647618 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.647692 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.647710 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.647736 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.647754 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.750616 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.750696 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.750721 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.750749 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.750766 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.854394 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.854458 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.854477 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.854519 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.854541 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.956697 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.956781 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.956806 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.956836 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:52 crc kubenswrapper[4988]: I1123 06:46:52.956860 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:52Z","lastTransitionTime":"2025-11-23T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.059410 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.059483 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.059506 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.059535 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.059556 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.162618 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.162692 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.162728 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.162745 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.162759 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.265362 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.265403 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.265415 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.265432 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.265445 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.367868 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.367919 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.367935 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.367954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.367969 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.470507 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.470543 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.470553 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.470567 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.470577 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.495768 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:53 crc kubenswrapper[4988]: E1123 06:46:53.495871 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.573835 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.573889 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.573905 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.573931 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.573951 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.676615 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.676682 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.676702 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.676727 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.676744 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.779233 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.779297 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.779315 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.779342 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.779359 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.882138 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.882222 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.882242 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.882265 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.882282 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.984455 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.984494 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.984504 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.984519 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:53 crc kubenswrapper[4988]: I1123 06:46:53.984529 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:53Z","lastTransitionTime":"2025-11-23T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.088173 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.088306 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.088330 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.088364 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.088389 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:54Z","lastTransitionTime":"2025-11-23T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.191173 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.191247 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.191261 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.191279 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.191291 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:54Z","lastTransitionTime":"2025-11-23T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.294414 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.294450 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.294458 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.294472 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.294483 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:54Z","lastTransitionTime":"2025-11-23T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.396892 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.396930 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.396939 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.396954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.396964 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:54Z","lastTransitionTime":"2025-11-23T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.495831 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.495849 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:54 crc kubenswrapper[4988]: E1123 06:46:54.495948 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.496013 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:54 crc kubenswrapper[4988]: E1123 06:46:54.496123 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:54 crc kubenswrapper[4988]: E1123 06:46:54.496165 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.499965 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.500019 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.500044 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.500071 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.500089 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:54Z","lastTransitionTime":"2025-11-23T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.603323 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.603385 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.603403 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.603429 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.603447 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:54Z","lastTransitionTime":"2025-11-23T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.706287 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.706342 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.706361 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.706383 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.706399 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:54Z","lastTransitionTime":"2025-11-23T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.808870 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.808908 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.808916 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.808930 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.808940 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:54Z","lastTransitionTime":"2025-11-23T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.911717 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.911786 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.911808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.911836 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:54 crc kubenswrapper[4988]: I1123 06:46:54.911857 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:54Z","lastTransitionTime":"2025-11-23T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.013673 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.013729 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.013746 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.013771 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.013788 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.117113 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.117172 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.117223 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.117255 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.117305 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.219933 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.219983 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.220004 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.220029 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.220045 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.323295 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.323351 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.323368 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.323392 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.323409 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.426411 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.426439 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.426447 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.426462 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.426472 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.495876 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:55 crc kubenswrapper[4988]: E1123 06:46:55.495976 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.528494 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.528537 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.528548 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.528564 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.528574 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.631425 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.631485 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.631494 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.631509 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.631521 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.734316 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.734368 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.734389 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.734412 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.734429 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.836581 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.836619 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.836647 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.836664 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.836674 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.932320 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.932356 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.932364 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.932377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.932385 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: E1123 06:46:55.950912 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.954889 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.954918 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.954927 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.954942 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.954951 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: E1123 06:46:55.972030 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.975905 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.975958 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.975976 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.975999 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.976016 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:55 crc kubenswrapper[4988]: E1123 06:46:55.995111 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.999109 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.999142 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.999151 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.999164 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:55 crc kubenswrapper[4988]: I1123 06:46:55.999175 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:55Z","lastTransitionTime":"2025-11-23T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: E1123 06:46:56.016579 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.022012 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.022050 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.022063 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.022080 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.022093 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: E1123 06:46:56.040814 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:56 crc kubenswrapper[4988]: E1123 06:46:56.040930 4988 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.042765 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.042798 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.042810 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.042825 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.042837 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.145351 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.145408 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.145424 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.145446 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.145465 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.248450 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.248477 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.248487 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.248499 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.248509 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.350343 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.350397 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.350412 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.350433 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.350448 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.453492 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.453543 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.453558 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.453578 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.453593 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.495623 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.495671 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.495694 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:56 crc kubenswrapper[4988]: E1123 06:46:56.495746 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:56 crc kubenswrapper[4988]: E1123 06:46:56.495831 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:56 crc kubenswrapper[4988]: E1123 06:46:56.495926 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.555611 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.555685 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.555698 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.555725 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.555739 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.659259 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.659321 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.659338 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.659367 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.659385 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.762197 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.762307 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.762318 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.762334 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.762345 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.864800 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.864837 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.864847 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.864861 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.864870 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.967915 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.967953 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.967962 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.967977 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.967988 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:56Z","lastTransitionTime":"2025-11-23T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.994567 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4p82c_0dde7218-bd4b-4585-b049-cb8db163fdac/kube-multus/0.log" Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.994634 4988 generic.go:334] "Generic (PLEG): container finished" podID="0dde7218-bd4b-4585-b049-cb8db163fdac" containerID="26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd" exitCode=1 Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.994680 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4p82c" event={"ID":"0dde7218-bd4b-4585-b049-cb8db163fdac","Type":"ContainerDied","Data":"26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd"} Nov 23 06:46:56 crc kubenswrapper[4988]: I1123 06:46:56.995101 4988 scope.go:117] "RemoveContainer" containerID="26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.009316 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.025705 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:
22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.044745 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7
462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.063520 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.071461 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.071511 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.071524 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.071543 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.071554 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:57Z","lastTransitionTime":"2025-11-23T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.081167 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"2025-11-23T06:46:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6\\\\n2025-11-23T06:46:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6 to /host/opt/cni/bin/\\\\n2025-11-23T06:46:11Z [verbose] multus-daemon started\\\\n2025-11-23T06:46:11Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.101282 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.114249 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.127082 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.148818 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.166138 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 
2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.173933 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.173973 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.173983 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.174000 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.174015 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:57Z","lastTransitionTime":"2025-11-23T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.178575 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.193226 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.215415 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.229934 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.257114 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b16
2f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountP
ath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e
4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.276103 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.277477 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.277542 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.277575 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.277605 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.277628 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:57Z","lastTransitionTime":"2025-11-23T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.295180 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.358228 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.381021 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.381061 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.381075 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.381095 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.381109 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:57Z","lastTransitionTime":"2025-11-23T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.484058 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.484098 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.484116 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.484131 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.484143 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:57Z","lastTransitionTime":"2025-11-23T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.495348 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:57 crc kubenswrapper[4988]: E1123 06:46:57.495481 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.587431 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.588183 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.588278 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.588359 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.588421 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:57Z","lastTransitionTime":"2025-11-23T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.691503 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.691562 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.691578 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.691603 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.691621 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:57Z","lastTransitionTime":"2025-11-23T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.795462 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.795506 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.795516 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.795535 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.795545 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:57Z","lastTransitionTime":"2025-11-23T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.904596 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.904655 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.904673 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.904698 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:57 crc kubenswrapper[4988]: I1123 06:46:57.904715 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:57Z","lastTransitionTime":"2025-11-23T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.000985 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4p82c_0dde7218-bd4b-4585-b049-cb8db163fdac/kube-multus/0.log" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.001045 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4p82c" event={"ID":"0dde7218-bd4b-4585-b049-cb8db163fdac","Type":"ContainerStarted","Data":"ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.006860 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.006931 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.006949 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.006980 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.006996 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.018063 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.033812 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.050918 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.069141 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.085904 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.109749 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.109786 4988 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.109795 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.109813 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.109825 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.110755 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\
\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.128030 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.143170 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.155672 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.168954 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.182319 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.201762 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8
fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.213559 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.213613 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.213632 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.213657 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.213674 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.217073 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.233808 4988 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.246919 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.262732 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"2025-11-23T06:46:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6\\\\n2025-11-23T06:46:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6 to /host/opt/cni/bin/\\\\n2025-11-23T06:46:11Z [verbose] multus-daemon started\\\\n2025-11-23T06:46:11Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.275958 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.289224 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.316261 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.316305 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.316322 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.316345 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.316362 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.419286 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.419335 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.419346 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.419362 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.419374 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.495751 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.495818 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.495859 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:46:58 crc kubenswrapper[4988]: E1123 06:46:58.495895 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:46:58 crc kubenswrapper[4988]: E1123 06:46:58.495987 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:46:58 crc kubenswrapper[4988]: E1123 06:46:58.496104 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.496755 4988 scope.go:117] "RemoveContainer" containerID="ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.516572 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc 
kubenswrapper[4988]: I1123 06:46:58.521886 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.521911 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.521923 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.521942 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.521954 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.532598 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.546131 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.558335 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 
2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.570507 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.582289 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.609865 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.624932 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.624965 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.624974 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.624988 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.624997 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.625309 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.638660 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.652713 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.664272 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.696408 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8
fe48f5d72e27432070be11ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.711118 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.724219 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.735493 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.735518 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.735526 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.735541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.735550 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.737748 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.747602 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.757943 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"2025-11-23T06:46:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6\\\\n2025-11-23T06:46:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6 to /host/opt/cni/bin/\\\\n2025-11-23T06:46:11Z [verbose] multus-daemon started\\\\n2025-11-23T06:46:11Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.794527 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:58Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.838004 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.838056 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.838069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.838084 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.838476 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.944893 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.944926 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.944938 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.944953 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:58 crc kubenswrapper[4988]: I1123 06:46:58.944963 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:58Z","lastTransitionTime":"2025-11-23T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.006899 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/2.log" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.009798 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.010265 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.029343 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.039898 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.047360 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.047391 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.047403 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.047420 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.047432 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.051012 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.060377 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.071135 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.080154 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.098031 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf
07edcb395f3886ebe7c266c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.111823 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.125618 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.133637 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.144328 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"2025-11-23T06:46:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6\\\\n2025-11-23T06:46:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6 to /host/opt/cni/bin/\\\\n2025-11-23T06:46:11Z [verbose] multus-daemon started\\\\n2025-11-23T06:46:11Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.149520 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.149558 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.149567 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.149581 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.149592 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.155143 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.167035 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 
06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.176682 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.191212 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.208094 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 
2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.222909 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.237514 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:46:59Z is after 2025-08-24T17:21:41Z" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.259852 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.259911 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.259922 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.259938 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.259947 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.362161 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.362225 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.362235 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.362249 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.362260 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.464571 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.465057 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.465070 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.465091 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.465104 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.496069 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:46:59 crc kubenswrapper[4988]: E1123 06:46:59.496282 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.568619 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.568671 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.568686 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.568706 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.568717 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.671337 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.671388 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.671399 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.671419 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.671430 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.774303 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.774381 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.774405 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.774436 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.774458 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.876953 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.877011 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.877029 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.877054 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.877072 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.979727 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.979791 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.979810 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.979837 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:46:59 crc kubenswrapper[4988]: I1123 06:46:59.979855 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:46:59Z","lastTransitionTime":"2025-11-23T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.014813 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/3.log" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.015728 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/2.log" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.018225 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7" exitCode=1 Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.018320 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.018577 4988 scope.go:117] "RemoveContainer" containerID="ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.019487 4988 scope.go:117] "RemoveContainer" containerID="2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7" Nov 23 06:47:00 crc kubenswrapper[4988]: E1123 06:47:00.019774 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.039987 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.060366 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.071490 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z"
Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.082414 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.082447 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.082455 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.082470 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.082480 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:00Z","lastTransitionTime":"2025-11-23T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.086590 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.101346 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.115864 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.144404 4988 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded0724272768fed2d925e6f86a20679cd502ac8fe48f5d72e27432070be11ab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:33Z\\\",\\\"message\\\":\\\"perator per-node LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488789 6652 services_controller.go:453] Built service openshift-machine-api/machine-api-operator template LB for network=default: []services.LB{}\\\\nI1123 06:46:33.488802 6652 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/packageserver-service]} name:Service_openshift-operator-lifecycle-manager/packageserver-service_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.153:5443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5e50827b-d271-442b-b8a7-7f33b2cd6b11}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1123 06:46:33.488861 6652 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:59Z\\\",\\\"message\\\":\\\"rator_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1123 06:46:59.356234 6982 services_controller.go:360] Finished syncing service m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.166086 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.184133 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.186443 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.186518 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.186541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.186572 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.186596 4988 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:00Z","lastTransitionTime":"2025-11-23T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.197338 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.213901 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"2025-11-23T06:46:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6\\\\n2025-11-23T06:46:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6 to /host/opt/cni/bin/\\\\n2025-11-23T06:46:11Z [verbose] multus-daemon started\\\\n2025-11-23T06:46:11Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.226697 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.242622 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 
06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.254593 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.268829 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.287136 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 
2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.288728 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.288771 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.288780 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.288795 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.288805 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:00Z","lastTransitionTime":"2025-11-23T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.305129 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.318284 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:00Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.391957 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.392039 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.392069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.392099 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.392125 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:00Z","lastTransitionTime":"2025-11-23T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.494692 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.494737 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.494750 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.494766 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.494778 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:00Z","lastTransitionTime":"2025-11-23T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.495551 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.495642 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.495795 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:00 crc kubenswrapper[4988]: E1123 06:47:00.495864 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:00 crc kubenswrapper[4988]: E1123 06:47:00.496177 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:00 crc kubenswrapper[4988]: E1123 06:47:00.496496 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.598294 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.598364 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.598384 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.598406 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.598422 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:00Z","lastTransitionTime":"2025-11-23T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.701979 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.702041 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.702058 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.702084 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.702101 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:00Z","lastTransitionTime":"2025-11-23T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.805268 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.805328 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.805347 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.805378 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.805398 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:00Z","lastTransitionTime":"2025-11-23T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.907815 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.908146 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.908268 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.908411 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:00 crc kubenswrapper[4988]: I1123 06:47:00.908497 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:00Z","lastTransitionTime":"2025-11-23T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.012099 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.012167 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.012188 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.012251 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.012274 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.023670 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/3.log" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.027681 4988 scope.go:117] "RemoveContainer" containerID="2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7" Nov 23 06:47:01 crc kubenswrapper[4988]: E1123 06:47:01.027871 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.048369 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.065394 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.083486 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 
2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.100743 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.115581 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.115623 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.115635 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.115655 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.115670 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.119597 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.153217 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.169251 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.186699 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.200675 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.216800 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.219160 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.219249 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.219264 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.219286 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.219402 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.231892 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.261622 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:59Z\\\",\\\"message\\\":\\\"rator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1123 
06:46:59.356234 6982 services_controller.go:360] Finished syncing service m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.282445 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.303936 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.320448 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.322141 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.322186 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.322231 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.322259 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.322280 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.345672 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"2025-11-23T06:46:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6\\\\n2025-11-23T06:46:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6 to /host/opt/cni/bin/\\\\n2025-11-23T06:46:11Z [verbose] multus-daemon started\\\\n2025-11-23T06:46:11Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.364002 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.381949 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:01Z is after 2025-08-24T17:21:41Z" Nov 23 
06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.424300 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.424340 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.424352 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.424369 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.424382 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.496052 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:01 crc kubenswrapper[4988]: E1123 06:47:01.496184 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.526800 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.526855 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.526864 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.526880 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.526893 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.630510 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.630548 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.630556 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.630569 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.630579 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.733424 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.733491 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.733516 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.733545 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.733566 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.836105 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.836147 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.836156 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.836170 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.836180 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.938621 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.938662 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.938673 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.938690 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:01 crc kubenswrapper[4988]: I1123 06:47:01.938701 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:01Z","lastTransitionTime":"2025-11-23T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.041571 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.041619 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.041635 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.041654 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.041667 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.144531 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.144568 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.144577 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.144590 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.144600 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.247384 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.247447 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.247463 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.247490 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.247508 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.350949 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.351020 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.351041 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.351071 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.351096 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.454038 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.454107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.454120 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.454136 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.454149 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.495959 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.495986 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:02 crc kubenswrapper[4988]: E1123 06:47:02.496155 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.496291 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:02 crc kubenswrapper[4988]: E1123 06:47:02.496467 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:02 crc kubenswrapper[4988]: E1123 06:47:02.496634 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.555908 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.555952 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.555963 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.555977 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.555988 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.659310 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.659376 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.659403 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.659434 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.659458 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.762631 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.762677 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.762689 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.762705 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.762717 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.865153 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.865184 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.865217 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.865232 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.865243 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.967779 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.967825 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.967837 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.967853 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:02 crc kubenswrapper[4988]: I1123 06:47:02.967864 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:02Z","lastTransitionTime":"2025-11-23T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.070710 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.070783 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.070808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.070836 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.070854 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.173824 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.173871 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.173889 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.173911 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.173927 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.276533 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.276602 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.276618 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.276640 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.276659 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.379581 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.379637 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.379652 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.379672 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.379687 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.482276 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.482340 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.482362 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.482391 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.482414 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
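Each five-line block in this stretch is one node-status sync: kubelet_node_status.go:724 records four node events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) and setters.go:603 then sets the Ready condition to False with reason KubeletNotReady. The condition={...} payload is a serialized v1.NodeCondition; the sketch below rebuilds it with the public API types (a standalone illustration, assuming the k8s.io/api and k8s.io/apimachinery modules are available, not kubelet source).

// ready_condition.go — reproduces the condition={...} payload from setters.go:603.
package main

import (
	"encoding/json"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Timestamp taken from one of the log entries above.
	now := metav1.NewTime(time.Date(2025, 11, 23, 6, 47, 2, 0, time.UTC))
	cond := v1.NodeCondition{
		Type:               v1.NodeReady,
		Status:             v1.ConditionFalse, // node is NotReady
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	b, _ := json.Marshal(cond)
	fmt.Println(string(b)) // matches the condition={...} JSON in the log
}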
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.495709 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:47:03 crc kubenswrapper[4988]: E1123 06:47:03.495903 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.585112 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.585226 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.585246 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.585269 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.585285 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.688012 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.688067 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.688083 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.688104 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.688123 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.790801 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.790866 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.790883 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.790907 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.790924 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.893584 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.893648 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.893666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.893690 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.893708 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.997481 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.997521 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.997530 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.997546 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:03 crc kubenswrapper[4988]: I1123 06:47:03.997558 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:03Z","lastTransitionTime":"2025-11-23T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.099722 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.099792 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.099809 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.099831 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.099848 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:04Z","lastTransitionTime":"2025-11-23T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.202226 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.202272 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.202294 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.202324 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.202346 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:04Z","lastTransitionTime":"2025-11-23T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.304919 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.304975 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.304993 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.305017 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.305034 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:04Z","lastTransitionTime":"2025-11-23T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.408173 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.408291 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.408317 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.408348 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.408374 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:04Z","lastTransitionTime":"2025-11-23T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.495485 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.495497 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:04 crc kubenswrapper[4988]: E1123 06:47:04.495722 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.495782 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:04 crc kubenswrapper[4988]: E1123 06:47:04.495944 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:04 crc kubenswrapper[4988]: E1123 06:47:04.496059 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.510719 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.510793 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.510816 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.510839 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.510856 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:04Z","lastTransitionTime":"2025-11-23T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.613726 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.613772 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.613788 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.613810 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.613829 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:04Z","lastTransitionTime":"2025-11-23T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.716802 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.716867 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.716885 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.716907 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.716923 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:04Z","lastTransitionTime":"2025-11-23T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.819579 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.819650 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.819668 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.819692 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.819709 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:04Z","lastTransitionTime":"2025-11-23T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.922739 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.922793 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.922806 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.922829 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:04 crc kubenswrapper[4988]: I1123 06:47:04.922843 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:04Z","lastTransitionTime":"2025-11-23T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.025943 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.026019 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.026034 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.026057 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.026072 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.129463 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.129543 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.129561 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.129586 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.129603 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.232184 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.232282 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.232301 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.232323 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.232339 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.336707 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.336796 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.336813 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.336837 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.336853 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.440083 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.440169 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.440224 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.440259 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.440284 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.495424 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:05 crc kubenswrapper[4988]: E1123 06:47:05.495685 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.543736 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.543801 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.543824 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.543856 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.543879 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.647296 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.647391 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.647409 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.647466 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.647483 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.750121 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.750225 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.750246 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.750283 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.750305 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.853647 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.853736 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.853761 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.853794 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.853819 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.958403 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.958488 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.958508 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.958538 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:05 crc kubenswrapper[4988]: I1123 06:47:05.958559 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:05Z","lastTransitionTime":"2025-11-23T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.063243 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.063321 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.063341 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.063374 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.063393 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.134846 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.134906 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.134924 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.134954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.134971 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
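The same block recurs roughly every 100 ms from 06:47:02.555 onward, so the journal alone is noisy. The Ready condition can instead be read once from the API server; a client-go sketch follows (the node name "crc" is taken from the log, the kubeconfig path is an assumption).

// ready_status.go — reads the crc node's Ready condition from the API server.
package main

import (
	"context"
	"fmt"
	"log"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from the default kubeconfig (~/.kube/config); assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			fmt.Printf("Ready=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
		}
	}
}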
Nov 23 06:47:06 crc kubenswrapper[4988]: E1123 06:47:06.152572 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:06Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.157783 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.157867 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.157887 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.157942 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.157960 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:06 crc kubenswrapper[4988]: E1123 06:47:06.178539 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:06Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.183744 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.183806 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.183825 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.183850 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.183868 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:06 crc kubenswrapper[4988]: E1123 06:47:06.203587 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:06Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.208706 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.208767 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
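Every retry in this burst dies at the same point: the kubelet's status PATCH is intercepted by the node.network-node-identity.openshift.io validating webhook, and the TLS handshake to 127.0.0.1:9743 fails because the serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-23. A minimal Go probe, offered as a hypothetical troubleshooting sketch (not something the kubelet itself runs), prints the validity window of whatever certificate that endpoint presents:

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // InsecureSkipVerify lets us complete the handshake and inspect a
        // certificate that normal verification would reject as expired.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("NotBefore:", cert.NotBefore)
        fmt.Println("NotAfter: ", cert.NotAfter) // the log implies 2025-08-24T17:21:41Z
    }

If NotAfter is in the past, no amount of kubelet retrying can succeed; the cluster's internal certificates have to be renewed first (on CRC that usually means letting certificate rotation finish after start, or recreating the instance).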
event="NodeHasNoDiskPressure" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.208791 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.208819 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.208839 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:06 crc kubenswrapper[4988]: E1123 06:47:06.229557 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:06Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.234534 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.234594 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
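Interleaved with the webhook failures, every NotReady condition carries the same underlying message: no CNI configuration file in /etc/kubernetes/cni/net.d/. The check behind that message amounts to scanning that directory for loadable network configs; the sketch below is an illustration under that assumption (using the directory path from the log and the conventional CNI file extensions), not the runtime's actual code:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read", dir, "-", err)
            return
        }
        found := false
        for _, e := range entries {
            // CNI convention: config files end in .conf, .conflist, or .json.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("candidate CNI config:", filepath.Join(dir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file in", dir)
        }
    }

An empty directory here is expected until the network plugin (OVN-Kubernetes, going by the network-node-identity webhook) writes its config, which it cannot do while its own components are blocked by the expired certificate.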
event="NodeHasNoDiskPressure" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.234615 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.234645 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.234668 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:06 crc kubenswrapper[4988]: E1123 06:47:06.255317 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:06Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:06 crc kubenswrapper[4988]: E1123 06:47:06.255567 4988 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.258897 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
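The 06:47:06.255567 entry marks the kubelet giving up: updateNodeStatus attempts the PATCH a fixed number of times (nodeStatusUpdateRetry, 5 in the upstream kubelet) before reporting "update node status exceeds retry count", which matches the run of "will retry" errors above. A simplified, runnable paraphrase of that loop, with a stub standing in for the real API call:

    package main

    import (
        "errors"
        "fmt"
    )

    // nodeStatusUpdateRetry mirrors the upstream kubelet constant.
    const nodeStatusUpdateRetry = 5

    // tryUpdateNodeStatus stands in for the real PATCH attempt; here it always
    // fails, the way the expired webhook certificate fails every attempt above.
    func tryUpdateNodeStatus(attempt int) error {
        return errors.New("failed calling webhook \"node.network-node-identity.openshift.io\"")
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := tryUpdateNodeStatus(i); err != nil {
                fmt.Println("Error updating node status, will retry:", err)
                continue
            }
            return nil
        }
        return fmt.Errorf("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println("Unable to update node status:", err)
        }
    }

The kubelet starts a fresh cycle on its next sync period, which is why the same pattern repeats throughout this log.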
event="NodeHasSufficientMemory" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.258962 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.258987 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.259026 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.259048 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.362504 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.362583 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.362608 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.362642 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.362670 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.466272 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.466349 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.466368 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.466395 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.466413 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.495985 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.496057 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:06 crc kubenswrapper[4988]: E1123 06:47:06.496157 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.496264 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:06 crc kubenswrapper[4988]: E1123 06:47:06.496532 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:06 crc kubenswrapper[4988]: E1123 06:47:06.496884 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.515864 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.569496 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.569577 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.569596 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.569621 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:06 crc kubenswrapper[4988]: I1123 06:47:06.569640 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:06Z","lastTransitionTime":"2025-11-23T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[... the identical five-entry node-status sequence ("Recording event message for node" events NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, followed by the setters.go:603 "Node became not ready" entry with the same KubeletNotReady / NetworkPluginNotReady condition) repeats, differing only in timestamps, at 06:47:06.672, 06:47:06.777, 06:47:06.881, 06:47:06.983, 06:47:07.088, 06:47:07.191, 06:47:07.294 and 06:47:07.398 ...]
Nov 23 06:47:07 crc kubenswrapper[4988]: I1123 06:47:07.495987 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:47:07 crc kubenswrapper[4988]: E1123 06:47:07.496246 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the same five-entry node-status sequence repeats at 06:47:07.501, 06:47:07.604, 06:47:07.707, 06:47:07.810, 06:47:07.914, 06:47:08.018, 06:47:08.121, 06:47:08.224, 06:47:08.328 and 06:47:08.430 ...]
Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.495409 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:47:08 crc kubenswrapper[4988]: E1123 06:47:08.495603 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.495749 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:08 crc kubenswrapper[4988]: E1123 06:47:08.495875 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.495743 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:08 crc kubenswrapper[4988]: E1123 06:47:08.495995 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.515185 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef3c0e3-61b0-4f5a-8c57-bbc366e8c889\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b182d4cdc5c7a501ed4181a04e799c276f2dd1da1b45b213bd34aa6ed03dce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20275a7ffdba4ccb884e0c194bd22b37ec139c6ef9fafb02bc3cc1051b9af910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242
b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20275a7ffdba4ccb884e0c194bd22b37ec139c6ef9fafb02bc3cc1051b9af910\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.537812 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.537904 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.537930 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.537968 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.537992 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:08Z","lastTransitionTime":"2025-11-23T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.551365 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5dbe5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.574363 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.598691 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.616582 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.636583 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.640842 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.640898 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.640916 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.640945 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.640962 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:08Z","lastTransitionTime":"2025-11-23T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.652128 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.693690 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:59Z\\\",\\\"message\\\":\\\"rator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1123 
06:46:59.356234 6982 services_controller.go:360] Finished syncing service m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.730337 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.743397 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.743455 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:08 crc 
kubenswrapper[4988]: I1123 06:47:08.743473 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.743500 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.743520 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:08Z","lastTransitionTime":"2025-11-23T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.750674 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.767372 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.788876 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"2025-11-23T06:46:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6\\\\n2025-11-23T06:46:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6 to /host/opt/cni/bin/\\\\n2025-11-23T06:46:11Z [verbose] multus-daemon started\\\\n2025-11-23T06:46:11Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.801715 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.818452 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 
06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.833746 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.846349 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.846407 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.846425 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.846449 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.846465 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:08Z","lastTransitionTime":"2025-11-23T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.847886 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.862808 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.876454 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.892880 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:08Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.949948 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.950034 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.950060 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.950096 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:08 crc kubenswrapper[4988]: I1123 06:47:08.950119 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:08Z","lastTransitionTime":"2025-11-23T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.052624 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.052681 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.052697 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.052717 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.052732 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.155162 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.155263 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.155295 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.155326 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.155345 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.257981 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.258055 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.258078 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.258112 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.258138 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.361362 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.361528 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.361557 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.361585 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.361607 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.467513 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.467601 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.467624 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.467661 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.467692 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.495135 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:09 crc kubenswrapper[4988]: E1123 06:47:09.495361 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.571572 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.571629 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.571647 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.571673 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.571703 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.673778 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.673838 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.673852 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.673870 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.673882 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.777107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.777156 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.777177 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.777225 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.777243 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.880034 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.880120 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.880143 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.880174 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.880227 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.983508 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.983566 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.983582 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.983606 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:09 crc kubenswrapper[4988]: I1123 06:47:09.983623 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:09Z","lastTransitionTime":"2025-11-23T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.087106 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.087168 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.087185 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.087241 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.087258 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:10Z","lastTransitionTime":"2025-11-23T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.190513 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.190570 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.190586 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.190609 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.190627 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:10Z","lastTransitionTime":"2025-11-23T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.295096 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.295143 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.295154 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.295171 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.295183 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:10Z","lastTransitionTime":"2025-11-23T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.397965 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.398035 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.398057 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.398077 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.398091 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:10Z","lastTransitionTime":"2025-11-23T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.495734 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.495793 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.495753 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:10 crc kubenswrapper[4988]: E1123 06:47:10.495954 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:10 crc kubenswrapper[4988]: E1123 06:47:10.496077 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:10 crc kubenswrapper[4988]: E1123 06:47:10.496225 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.502064 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.502470 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.502580 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.502623 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.502669 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:10Z","lastTransitionTime":"2025-11-23T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.607150 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.607263 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.607289 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.607318 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.607336 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:10Z","lastTransitionTime":"2025-11-23T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.710607 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.710666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.710684 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.710711 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.710730 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:10Z","lastTransitionTime":"2025-11-23T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.813255 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.813303 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.813321 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.813343 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.813359 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:10Z","lastTransitionTime":"2025-11-23T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.916051 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.916246 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.916276 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.916302 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:10 crc kubenswrapper[4988]: I1123 06:47:10.916318 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:10Z","lastTransitionTime":"2025-11-23T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.019147 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.019255 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.019274 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.019301 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.019320 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.121917 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.121977 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.121993 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.122017 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.122035 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.225264 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.225324 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.225341 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.225364 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.225383 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.328905 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.328968 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.328986 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.329040 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.329057 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.431655 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.431791 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.431811 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.431887 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.431916 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.495045 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:11 crc kubenswrapper[4988]: E1123 06:47:11.495178 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.534644 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.534687 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.534698 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.534715 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.534726 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.638113 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.638175 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.638229 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.638261 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.638281 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.741661 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.741729 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.741749 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.741814 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.741835 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.845174 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.845265 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.845281 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.845306 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.845326 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.948716 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.948796 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.948818 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.948851 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:11 crc kubenswrapper[4988]: I1123 06:47:11.948874 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:11Z","lastTransitionTime":"2025-11-23T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.051839 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.051929 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.051954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.051985 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.052007 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.155046 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.155103 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.155120 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.155144 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.155164 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.259019 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.259073 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.259089 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.259119 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.259138 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.362600 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.362663 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.362683 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.362709 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.362727 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.448861 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.448991 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449104 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:48:16.449054458 +0000 UTC m=+148.757567261 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449139 4988 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.449226 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449263 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:48:16.449237923 +0000 UTC m=+148.757750716 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.449330 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.449387 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449408 4988 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449508 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449515 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-23 06:48:16.449486399 +0000 UTC m=+148.757999252 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449731 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449760 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449802 4988 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449827 4988 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449768 4988 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449902 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:48:16.449876719 +0000 UTC m=+148.758389522 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.449937 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:48:16.44991797 +0000 UTC m=+148.758430763 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.466381 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.466658 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.466716 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.466748 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.466777 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.496085 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.496170 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.496180 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.496314 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.496450 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.496621 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.570003 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.570053 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.570069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.570093 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.570111 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.673463 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.673509 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.673524 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.673548 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.673564 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.778169 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.778245 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.778291 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.778317 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.778335 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.854011 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.854279 4988 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 06:47:12 crc kubenswrapper[4988]: E1123 06:47:12.854696 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs podName:1a94eb06-d03a-43c9-8004-73d48280435f nodeName:}" failed. No retries permitted until 2025-11-23 06:48:16.854670256 +0000 UTC m=+149.163183049 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs") pod "network-metrics-daemon-l5wgs" (UID: "1a94eb06-d03a-43c9-8004-73d48280435f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.881276 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.881349 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.881372 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.881403 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.881423 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.984648 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.984700 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.984728 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.984846 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:12 crc kubenswrapper[4988]: I1123 06:47:12.984861 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:12Z","lastTransitionTime":"2025-11-23T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.090524 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.090603 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.090621 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.090645 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.090665 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:13Z","lastTransitionTime":"2025-11-23T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.193335 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.193377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.193387 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.193401 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.193411 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:13Z","lastTransitionTime":"2025-11-23T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.296866 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.296953 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.296978 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.297013 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.297037 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:13Z","lastTransitionTime":"2025-11-23T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.400532 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.400635 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.400660 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.400687 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.400709 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:13Z","lastTransitionTime":"2025-11-23T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.495126 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:47:13 crc kubenswrapper[4988]: E1123 06:47:13.495339 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.496520 4988 scope.go:117] "RemoveContainer" containerID="2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7"
Nov 23 06:47:13 crc kubenswrapper[4988]: E1123 06:47:13.496991 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.503298 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.503350 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.503369 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.503390 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.503407 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:13Z","lastTransitionTime":"2025-11-23T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.606787 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.606857 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.606882 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.606910 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.606930 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:13Z","lastTransitionTime":"2025-11-23T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.709770 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.709837 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.709854 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.709877 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.709894 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:13Z","lastTransitionTime":"2025-11-23T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.813857 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.813916 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.813933 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.813961 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.813979 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:13Z","lastTransitionTime":"2025-11-23T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.916949 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.917000 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.917018 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.917039 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:13 crc kubenswrapper[4988]: I1123 06:47:13.917056 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:13Z","lastTransitionTime":"2025-11-23T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.019005 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.019222 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.019252 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.019281 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.019301 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.122536 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.122605 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.122625 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.122653 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.122671 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.225921 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.225983 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.226003 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.226029 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.226047 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.329378 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.329458 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.329476 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.329507 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.329528 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.433185 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.433282 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.433302 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.433329 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.433346 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.495495 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.495563 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:47:14 crc kubenswrapper[4988]: E1123 06:47:14.495754 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.495795 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:47:14 crc kubenswrapper[4988]: E1123 06:47:14.495961 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:47:14 crc kubenswrapper[4988]: E1123 06:47:14.496460 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.536808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.536866 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.536882 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.536905 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.536923 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.640660 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.640714 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.640732 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.640760 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.640777 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.744190 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.745312 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.745354 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.745384 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.745403 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.848848 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.848944 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.848968 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.848997 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.849019 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.952602 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.952684 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.952707 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.952741 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:14 crc kubenswrapper[4988]: I1123 06:47:14.952868 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:14Z","lastTransitionTime":"2025-11-23T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.056536 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.056788 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.056872 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.056899 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.056917 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.160515 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.160580 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.160597 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.160622 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.160642 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.264330 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.264399 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.264409 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.264424 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.264435 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.367408 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.367477 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.367487 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.367503 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.367514 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.470531 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.470573 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.470586 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.470640 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.470653 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.495344 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:47:15 crc kubenswrapper[4988]: E1123 06:47:15.495468 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.573494 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.573554 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.573570 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.573594 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.573611 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.676435 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.676498 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.676518 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.676546 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.676564 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.779584 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.779637 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.779661 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.779688 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.779709 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.883049 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.883117 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.883134 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.883158 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.883178 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.986343 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.986402 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.986420 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.986462 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:15 crc kubenswrapper[4988]: I1123 06:47:15.986481 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:15Z","lastTransitionTime":"2025-11-23T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.089588 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.089650 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.089668 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.089690 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.089711 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.192556 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.192602 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.192616 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.192632 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.192644 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.295639 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.295700 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.295718 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.295742 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.295763 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.399586 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.399642 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.399658 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.399686 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.399699 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.496169 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.496266 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:47:16 crc kubenswrapper[4988]: E1123 06:47:16.496349 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.496412 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:47:16 crc kubenswrapper[4988]: E1123 06:47:16.496490 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:47:16 crc kubenswrapper[4988]: E1123 06:47:16.496701 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.502042 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.502110 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.502134 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.502168 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.502189 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.569763 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.569830 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.569854 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.569880 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.569902 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 06:47:16 crc kubenswrapper[4988]: E1123 06:47:16.591809 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.597479 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.597550 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.597569 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.597594 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.597612 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:16 crc kubenswrapper[4988]: E1123 06:47:16.617888 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.623107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.623162 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
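Every attempt in this burst posts the same strategic-merge patch: "$setElementOrder/conditions" pins the order of the four condition entries, "allocatable"/"capacity" report node resources, "conditions" carries the MemoryPressure/DiskPressure/PIDPressure/Ready heartbeats, "images" inventories the node's cached images by name and sizeBytes, and "nodeInfo" closes with bootID and systemUUID. A minimal Go sketch of that shape, decoding a hand-trimmed fragment of the payload printed in full above (the statusPatch struct is illustrative only and covers just the fields named here; the field tags match the payload):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    // statusPatch is an illustrative, trimmed-down shape of the node status
    // patch seen in the log; the real Node status carries more fields.
    type statusPatch struct {
    	Status struct {
    		Allocatable map[string]string `json:"allocatable"`
    		Capacity    map[string]string `json:"capacity"`
    		Conditions  []struct {
    			Type    string `json:"type"`
    			Status  string `json:"status"`
    			Reason  string `json:"reason"`
    			Message string `json:"message"`
    		} `json:"conditions"`
    		Images []struct {
    			Names     []string `json:"names"`
    			SizeBytes int64    `json:"sizeBytes"`
    		} `json:"images"`
    		NodeInfo struct {
    			BootID     string `json:"bootID"`
    			SystemUUID string `json:"systemUUID"`
    		} `json:"nodeInfo"`
    	} `json:"status"`
    }

    func main() {
    	// Hand-trimmed fragment of the payload above, log escaping removed.
    	raw := `{"status":{
    	  "allocatable":{"cpu":"11800m","memory":"32404560Ki"},
    	  "capacity":{"cpu":"12","memory":"32865360Ki"},
    	  "conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady",
    	    "message":"container runtime network not ready"}],
    	  "images":[{"names":["quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887"],"sizeBytes":2887430265}],
    	  "nodeInfo":{"bootID":"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a","systemUUID":"e9018f38-2998-476c-ae0b-0f99d72a3f69"}}}`

    	var p statusPatch
    	if err := json.Unmarshal([]byte(raw), &p); err != nil {
    		log.Fatalf("unmarshal: %v", err)
    	}
    	for _, c := range p.Status.Conditions {
    		fmt.Printf("condition %s=%s (%s)\n", c.Type, c.Status, c.Reason)
    	}
    	fmt.Printf("%d image(s) reported, bootID %s\n", len(p.Status.Images), p.Status.NodeInfo.BootID)
    }

The image inventory dominates the payload, which is why each failed PATCH attempt dumps the same multi-kilobyte JSON into the journal.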
event="NodeHasNoDiskPressure" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.623178 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.623226 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.623247 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:16 crc kubenswrapper[4988]: E1123 06:47:16.643351 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.647976 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.648030 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
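The patch never gets past admission: the webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743 serves a certificate whose validity ended 2025-08-24T17:21:41Z, three months before the node clock of 2025-11-23T06:47:16Z. A small Go probe, run on the node itself (address taken from the log; InsecureSkipVerify is deliberate and for diagnosis only, since certificate verification is exactly what fails), prints the serving certificate's validity window:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Endpoint copied from the failing Post in the log. We want to read
    	// the certificate that verification rejects, not trust it.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		log.Fatalf("dial webhook endpoint: %v", err)
    	}
    	defer conn.Close()

    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Println("subject:   ", cert.Subject)
    	fmt.Println("not before:", cert.NotBefore.Format(time.RFC3339))
    	fmt.Println("not after: ", cert.NotAfter.Format(time.RFC3339))
    	if time.Now().After(cert.NotAfter) {
    		fmt.Println("serving certificate is expired, matching the x509 error above")
    	}
    }

On this machine the "not after" line would read 2025-08-24T17:21:41Z, matching the x509 error verbatim.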
event="NodeHasNoDiskPressure" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.648048 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.648071 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.648090 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:16 crc kubenswrapper[4988]: E1123 06:47:16.667954 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.673705 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.673749 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
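Independently of the admission failure, every setters.go entry repeats why Ready stays False: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI network configuration yet (it normally appears once the cluster network provider's pods come up). A rough Go approximation of that readiness check; the extension list is an assumption based on what libcni-style loaders conventionally accept:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Directory named in the NetworkReady message. This is a stand-in for
    	// the kubelet/CRI check: readiness needs at least one network config
    	// file here.
    	const dir = "/etc/kubernetes/cni/net.d"

    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		log.Fatalf("read %s: %v", dir, err)
    	}
    	found := 0
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json": // assumed accepted extensions
    			fmt.Println("CNI config found:", filepath.Join(dir, e.Name()))
    			found++
    		}
    	}
    	if found == 0 {
    		fmt.Println("no CNI configuration file; node will keep reporting NetworkReady=false")
    	}
    }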
event="NodeHasNoDiskPressure" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.673763 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.673784 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.673796 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:16 crc kubenswrapper[4988]: E1123 06:47:16.693529 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:16Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:16 crc kubenswrapper[4988]: E1123 06:47:16.693776 4988 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.695701 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
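The kubelet_node_status.go:572 line above shows the kubelet's bounded retry: each sync attempts the status PATCH a fixed number of times (nodeStatusUpdateRetry, 5 in the upstream kubelet, consistent with the five failed attempts in this burst; treat the exact constant as an assumption), then gives up until the next sync interval. A compressed Go sketch of that control flow, with patchNodeStatus standing in for the real API call:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // nodeStatusUpdateRetry mirrors the kubelet constant bounding PATCH
    // attempts per sync; 5 is the upstream value, assumed here.
    const nodeStatusUpdateRetry = 5

    // errWebhook reproduces the failure mode from the log: every attempt dies
    // in admission, so retrying within one sync can never succeed.
    var errWebhook = errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)

    // patchNodeStatus is a stand-in for the real API call.
    func patchNodeStatus() error { return errWebhook }

    func main() {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		if err := patchNodeStatus(); err != nil {
    			fmt.Printf("Error updating node status, will retry: %v\n", err)
    			continue
    		}
    		return
    	}
    	fmt.Println("Unable to update node status: update node status exceeds retry count")
    }

The next sync starts a fresh burst, so the pattern repeats until the webhook's certificate is renewed.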
event="NodeHasSufficientMemory" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.695768 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.695790 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.695825 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.695850 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.799728 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.799809 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.799828 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.799855 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.799881 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.903069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.903127 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.903137 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.903155 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:16 crc kubenswrapper[4988]: I1123 06:47:16.903167 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:16Z","lastTransitionTime":"2025-11-23T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.006666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.006738 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.006756 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.006780 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.006797 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.115173 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.115313 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.115355 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.115393 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.115420 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.219376 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.219458 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.219484 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.219514 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.219548 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.322803 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.322879 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.322900 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.322925 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.322943 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.426116 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.426222 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.426244 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.426268 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.426285 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.496077 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:17 crc kubenswrapper[4988]: E1123 06:47:17.496321 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.529431 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.529474 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.529491 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.529512 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.529528 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.632402 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.632469 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.632488 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.632514 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.632532 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.735303 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.735370 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.735388 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.735410 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.735428 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.837931 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.838001 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.838020 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.838045 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.838062 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.941048 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.941118 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.941138 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.941160 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:17 crc kubenswrapper[4988]: I1123 06:47:17.941177 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:17Z","lastTransitionTime":"2025-11-23T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.045115 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.045185 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.045259 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.045294 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.045316 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.147999 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.148062 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.148084 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.148112 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.148134 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.251720 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.251785 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.251801 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.251828 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.251845 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.354619 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.354676 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.354693 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.354717 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.354734 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.457024 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.457070 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.457083 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.457103 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.457115 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.496060 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.496131 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.496085 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:18 crc kubenswrapper[4988]: E1123 06:47:18.496303 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:18 crc kubenswrapper[4988]: E1123 06:47:18.496482 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:18 crc kubenswrapper[4988]: E1123 06:47:18.496905 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.516866 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.533895 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.560221 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.560275 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.560293 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.560318 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.560338 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.567057 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:59Z\\\",\\\"message\\\":\\\"rator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1123 06:46:59.356234 6982 services_controller.go:360] Finished syncing service 
m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.593164 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.610768 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 
06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.630353 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.649137 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.664317 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.664361 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.664372 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.664391 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.664404 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.670705 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"2025-11-23T06:46:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6\\\\n2025-11-23T06:46:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6 to /host/opt/cni/bin/\\\\n2025-11-23T06:46:11Z [verbose] multus-daemon started\\\\n2025-11-23T06:46:11Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.688351 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.707719 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.727849 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.746170 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.767716 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.768237 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.768423 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.769010 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 
06:47:18.769343 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.774137 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.795872 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.812763 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.828360 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef3c0e3-61b0-4f5a-8c57-bbc366e8c889\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b182d4cdc5c7a501ed4181a04e799c276f2dd1da1b45b213bd34aa6ed03dce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20275a7ffdba4ccb884e0c194bd22b37ec139c6ef9fafb02bc3cc1051b9af910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20275a7ffdba4ccb884e0c194bd22b37ec139c6ef9fafb02bc3cc1051b9af910\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.861237 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.872495 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.872541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.872552 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.872568 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.872579 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.882803 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.906362 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:18Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.975541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.975601 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.975656 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.975687 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:18 crc kubenswrapper[4988]: I1123 06:47:18.975710 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:18Z","lastTransitionTime":"2025-11-23T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.077801 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.077865 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.077889 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.077922 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.077944 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:19Z","lastTransitionTime":"2025-11-23T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.180474 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.180547 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.180565 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.180594 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.180615 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:19Z","lastTransitionTime":"2025-11-23T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.283878 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.283984 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.284009 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.284032 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.284049 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:19Z","lastTransitionTime":"2025-11-23T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.387346 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.387407 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.387423 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.387446 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.387466 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:19Z","lastTransitionTime":"2025-11-23T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.490931 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.490998 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.491023 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.491052 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.491073 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:19Z","lastTransitionTime":"2025-11-23T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.495478 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:19 crc kubenswrapper[4988]: E1123 06:47:19.495645 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.594071 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.594130 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.594145 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.594168 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.594182 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:19Z","lastTransitionTime":"2025-11-23T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.697444 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.697499 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.697513 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.697536 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.697551 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:19Z","lastTransitionTime":"2025-11-23T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.800827 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.800884 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.800967 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.801032 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.801052 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:19Z","lastTransitionTime":"2025-11-23T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.904181 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.904278 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.904295 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.904320 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:19 crc kubenswrapper[4988]: I1123 06:47:19.904338 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:19Z","lastTransitionTime":"2025-11-23T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.007577 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.007641 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.007662 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.007693 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.007718 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
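The rejected payloads in the "Failed to update status" entries earlier (etcd-crc, network-node-identity-vrzqb, networking-console-plugin-85b44fc459-gdk6g) are embedded as doubly escaped JSON, which makes them unreadable in place. A sketch that extracts and pretty-prints the first one (assumption: the two levels of backslash escaping are exactly as rendered in this dump; feed a journal excerpt on stdin):

    # decode_patch.py -- recover the status patch from a "Failed to update status" entry.
    import json, re, sys

    text = sys.stdin.read()
    m = re.search(r'failed to patch status \\"(.*?)\\" for pod', text, re.S)
    if not m:
        sys.exit("no status patch found in input")

    payload = m.group(1).replace("\n", " ")   # the dump wraps long entries; fold the wraps back
    for _ in range(2):                        # two escaping levels, as rendered in this journal
        payload = payload.encode().decode("unicode_escape")
    print(json.dumps(json.loads(payload), indent=2))

Decoded, the etcd-crc patch is perfectly well formed, e.g. etcd-rev running since 2025-11-23T06:45:53Z with restartCount 0, so the API server rejects it only because the webhook call fails, not because of the payload.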
Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.110488 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.110553 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.110573 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.110598 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.110617 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.213573 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.213638 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.213658 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.213682 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.213703 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.317436 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.317507 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.317533 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.317565 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.317587 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.419942 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.419989 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.419998 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.420013 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.420021 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.495295 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.495295 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.495417 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:20 crc kubenswrapper[4988]: E1123 06:47:20.495533 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:20 crc kubenswrapper[4988]: E1123 06:47:20.495698 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:20 crc kubenswrapper[4988]: E1123 06:47:20.495769 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.522030 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.522082 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.522100 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.522125 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.522142 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.624918 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.624966 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.625004 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.625027 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.625043 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.728594 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.728655 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.728672 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.728697 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.728715 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.832440 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.832501 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.832521 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.832546 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.832572 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.937064 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.937130 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.937147 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.937173 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:20 crc kubenswrapper[4988]: I1123 06:47:20.937224 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:20Z","lastTransitionTime":"2025-11-23T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.039559 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.039629 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.039641 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.039660 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.039673 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.142172 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.142274 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.142294 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.142321 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.142339 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.245395 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.245470 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.245493 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.245522 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.245543 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.349081 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.349134 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.349150 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.349172 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.349191 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.452594 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.452688 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.452712 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.452740 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.452763 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.495096 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:21 crc kubenswrapper[4988]: E1123 06:47:21.495300 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.555720 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.555765 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.555786 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.555813 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.555834 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.658383 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.658452 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.658477 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.658504 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.658524 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.761347 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.761389 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.761401 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.761418 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.761434 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.864060 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.864125 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.864144 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.864167 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.864321 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.967887 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.967953 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.967971 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.967995 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:21 crc kubenswrapper[4988]: I1123 06:47:21.968033 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:21Z","lastTransitionTime":"2025-11-23T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.070903 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.070983 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.071003 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.071026 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.071045 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:22Z","lastTransitionTime":"2025-11-23T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.174102 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.174174 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.174249 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.174276 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.174294 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:22Z","lastTransitionTime":"2025-11-23T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.277775 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.277821 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.277835 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.277865 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.277877 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:22Z","lastTransitionTime":"2025-11-23T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.381059 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.381107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.381124 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.381149 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.381171 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:22Z","lastTransitionTime":"2025-11-23T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.484667 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.484728 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.484742 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.484759 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.484770 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:22Z","lastTransitionTime":"2025-11-23T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.495293 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.495305 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.495362 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:22 crc kubenswrapper[4988]: E1123 06:47:22.495408 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:22 crc kubenswrapper[4988]: E1123 06:47:22.495544 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:22 crc kubenswrapper[4988]: E1123 06:47:22.495645 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.587912 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.588226 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.588313 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.588434 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.588528 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:22Z","lastTransitionTime":"2025-11-23T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.691328 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.691370 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.691381 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.691400 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.691435 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:22Z","lastTransitionTime":"2025-11-23T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.793531 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.793765 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.793785 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.793810 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.793830 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:22Z","lastTransitionTime":"2025-11-23T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.897116 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.897176 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.897220 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.897243 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:22 crc kubenswrapper[4988]: I1123 06:47:22.897259 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:22Z","lastTransitionTime":"2025-11-23T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:22.999982 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.000483 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.000633 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.000772 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.000924 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.103453 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.103498 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.103512 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.103531 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.103545 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.206559 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.206610 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.206627 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.206652 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.206668 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.310423 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.310480 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.310498 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.310525 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.310543 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.413011 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.413088 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.413115 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.413143 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.413166 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.495090 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:47:23 crc kubenswrapper[4988]: E1123 06:47:23.495809 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.515754 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.515799 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.515817 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.515839 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.515861 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.619314 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.619379 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.619404 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.619431 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.619452 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.722930 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.722992 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.723008 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.723032 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.723049 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.825387 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.825436 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.825452 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.825475 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.825493 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.928734 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.928798 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.928817 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.928840 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:23 crc kubenswrapper[4988]: I1123 06:47:23.928857 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:23Z","lastTransitionTime":"2025-11-23T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.031945 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.032066 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.032085 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.032107 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.032122 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.135515 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.135575 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.135592 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.135615 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.135633 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.238481 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.238620 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.238639 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.238664 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.238684 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.342534 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.342611 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.342635 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.342666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.342690 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.445830 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.445937 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.445963 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.445995 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.446050 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.495736 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.495806 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.495825 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:47:24 crc kubenswrapper[4988]: E1123 06:47:24.495959 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f"
Nov 23 06:47:24 crc kubenswrapper[4988]: E1123 06:47:24.496097 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:47:24 crc kubenswrapper[4988]: E1123 06:47:24.496234 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.549778 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.549860 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.549887 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.549923 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.549948 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.653092 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.653188 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.653255 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.653289 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.653312 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.755809 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.755862 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.755874 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.755894 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.755912 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.858955 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.858999 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.859010 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.859027 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.859039 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.961632 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.961690 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.961707 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.961733 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:24 crc kubenswrapper[4988]: I1123 06:47:24.961751 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:24Z","lastTransitionTime":"2025-11-23T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.064411 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.064472 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.064491 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.064515 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.064532 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:25Z","lastTransitionTime":"2025-11-23T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.167271 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.167347 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.167371 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.167408 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.167432 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:25Z","lastTransitionTime":"2025-11-23T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.270973 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.271028 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.271045 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.271069 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.271090 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:25Z","lastTransitionTime":"2025-11-23T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.374641 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.374701 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.374718 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.374745 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.374762 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:25Z","lastTransitionTime":"2025-11-23T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.478312 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.478378 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.478397 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.478422 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.478441 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:25Z","lastTransitionTime":"2025-11-23T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.495959 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:47:25 crc kubenswrapper[4988]: E1123 06:47:25.496157 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.581466 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.581663 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.581683 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.581708 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.581725 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:25Z","lastTransitionTime":"2025-11-23T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.690243 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.690333 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.690352 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.690377 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.690404 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:25Z","lastTransitionTime":"2025-11-23T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.794299 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.794356 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.794373 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.794396 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.794413 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:25Z","lastTransitionTime":"2025-11-23T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.897338 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.897398 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.897421 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.897449 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:25 crc kubenswrapper[4988]: I1123 06:47:25.897472 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:25Z","lastTransitionTime":"2025-11-23T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.000811 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.000857 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.000874 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.000896 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.000913 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.104298 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.104374 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.104398 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.104435 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.104458 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.207516 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.207586 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.207635 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.207665 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.207686 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.310493 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.310549 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.310565 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.310589 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.310604 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.413298 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.413371 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.413393 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.413422 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.413443 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.495844 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.495907 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:47:26 crc kubenswrapper[4988]: E1123 06:47:26.496325 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.496444 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:47:26 crc kubenswrapper[4988]: E1123 06:47:26.496622 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:47:26 crc kubenswrapper[4988]: E1123 06:47:26.496747 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f"
pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.516713 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.516765 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.516787 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.516815 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.516839 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.619446 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.619507 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.619523 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.619547 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.619564 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.723039 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.723098 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.723115 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.723140 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.723158 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.826001 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.826071 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.826096 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.826129 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.826152 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.828412 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.828470 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.828493 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.828518 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.828539 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 23 06:47:26 crc kubenswrapper[4988]: E1123 06:47:26.845792 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.850614 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.850676 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.850691 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.850715 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.850729 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:26 crc kubenswrapper[4988]: E1123 06:47:26.870079 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.875614 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.875670 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.875686 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.875711 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.875731 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:26 crc kubenswrapper[4988]: E1123 06:47:26.896869 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.902236 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.902275 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.902289 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.902308 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.902322 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:26 crc kubenswrapper[4988]: E1123 06:47:26.923054 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.927689 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.927749 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.927762 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.927778 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.927792 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:26 crc kubenswrapper[4988]: E1123 06:47:26.947095 4988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5b1a733f-d503-4ccc-a8b3-ea8f5f96dc6a\\\",\\\"systemUUID\\\":\\\"e9018f38-2998-476c-ae0b-0f99d72a3f69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:26Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:26 crc kubenswrapper[4988]: E1123 06:47:26.947446 4988 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.949265 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.949323 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.949342 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.949368 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:26 crc kubenswrapper[4988]: I1123 06:47:26.949389 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:26Z","lastTransitionTime":"2025-11-23T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.052168 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.052272 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.052292 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.052320 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.052337 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.155699 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.155759 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.155776 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.155801 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.155819 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.259065 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.259141 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.259165 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.259232 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.259259 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.362660 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.362745 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.362769 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.362801 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.362827 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.466274 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.466338 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.466355 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.466411 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.466450 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.495785 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:27 crc kubenswrapper[4988]: E1123 06:47:27.496003 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.497060 4988 scope.go:117] "RemoveContainer" containerID="2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7" Nov 23 06:47:27 crc kubenswrapper[4988]: E1123 06:47:27.497375 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.569228 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.569281 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.569297 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.569321 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.569338 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.672678 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.672748 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.672767 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.672791 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.672809 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.776143 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.776235 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.776263 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.776294 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.776317 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.879510 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.879561 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.879580 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.879603 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.879620 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.983334 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.983405 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.983423 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.983448 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:27 crc kubenswrapper[4988]: I1123 06:47:27.983466 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:27Z","lastTransitionTime":"2025-11-23T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.085958 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.086035 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.086054 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.086083 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.086107 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:28Z","lastTransitionTime":"2025-11-23T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.188230 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.188289 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.188307 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.188331 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.188349 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:28Z","lastTransitionTime":"2025-11-23T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.290747 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.290808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.290824 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.290871 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.290889 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:28Z","lastTransitionTime":"2025-11-23T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.393918 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.394018 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.394036 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.394065 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.394083 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:28Z","lastTransitionTime":"2025-11-23T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.495710 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.495980 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.495997 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:28 crc kubenswrapper[4988]: E1123 06:47:28.496272 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:28 crc kubenswrapper[4988]: E1123 06:47:28.496592 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:28 crc kubenswrapper[4988]: E1123 06:47:28.496868 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.498541 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.498863 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.498919 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.498952 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.498975 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:28Z","lastTransitionTime":"2025-11-23T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.516956 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.535138 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a94eb06-d03a-43c9-8004-73d48280435f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kfnn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l5wgs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.552716 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef3c0e3-61b0-4f5a-8c57-bbc366e8c889\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b182d4cdc5c7a501ed4181a04e799c276f2dd1da1b45b213bd34aa6ed03dce0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20275a7ffdba4ccb884e0c194bd22b37ec139c6ef9fafb02bc3cc1051b9af910\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20275a7ffdba4ccb884e0c194bd22b37ec139c6ef9fafb02bc3cc1051b9af910\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.585842 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e126d87d-34e7-46bb-9ff6-3cf86bf2d09c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db898a724c21d783a0a07d840fc987118e932c809522e11b23c67b632e6e151c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a3bb53ce871eba27d714d0c5fde6e9a765070fc9a7746d48eec4d8437ec48e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77bbaf8a321df2cb319ba3a4a0437ccaa8b034b9f494a63aceb8787935634cf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f4a34585cd1cbfb434bae3ddf69a3ae1d6ba5d
be5af2c81d64fa2c86fadc12c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48784d7c92d9f27724fb89a69f77f33f5159c9f332991b30f63e00bba4d79e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b34d08af82add246cac41457bc3973dd138422471c3376962e0ccbef668b7f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce6c7d7b0340ccb5a83dae7d8ba036b67e9aa0c21485b21cba40d42b0e3d0a49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe98cbd8c8589b2f5c615bbd3835b9c257ef28a1f0b72dcca38b9a0407a39975\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.602653 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.602713 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.602735 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.602764 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.602785 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:28Z","lastTransitionTime":"2025-11-23T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.606463 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df54de7fe54358fb4ab4aa87017ff0e1e36a8c13a15e3f648e0481ac94df438b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fcdf3e9a7b544486b24e38c730a455424ae439d01023708d7534054b8a3e11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.633912 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638ab0f4-59cd-4702-9e1d-bd3c3a5078e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://703462336bd838630174213adbab5f18d197f9e85a368e9434a59bcee4bc9cda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c904f2110dc75bd88717a1493670eb1f192680ebbaee391d0c74c17ba5838c08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8333f84728e6c7a68a1383eb23ebf5ae3ebf0061b38c63b749cd7ff0cc6823ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41439bd47e0a2e3d858a283075b85648352330035b751e6920cfaa3aadfe80a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://946f29acda45e3eb42f9bfa7c1937d4b991e3b09ddf12bedf855d75008e866d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0358bbfe6fe4950361e085364b0025ee578b6202116b4cb29ac5f448f012682\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c993d5dd12563277c4df7cf38b5e74e77d18f6fb9d348744f7a88007e4ed9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzcqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4kp4h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.657398 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.676740 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d4ea07baf7153bef63dd24bb9e4fbac7f9a3f10380e1890e49afb99496694a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvvt6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jnwbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.705710 4988 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.705784 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.705810 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.705848 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.705872 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:28Z","lastTransitionTime":"2025-11-23T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.710114 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf
07edcb395f3886ebe7c266c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:59Z\\\",\\\"message\\\":\\\"rator_TCP_cluster\\\\\\\", UUID:\\\\\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/cluster-autoscaler-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.245\\\\\\\", Port:9192, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1123 06:46:59.356234 6982 services_controller.go:360] Finished syncing service m\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxqnz_openshift-ovn-kubernetes(cb5bfadf-3097-45a0-a0d8-2b75e4c1e931)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hdwf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxqnz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.726402 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vzk8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33c56f7a-abc6-48c2-bfe8-53019ba9ed90\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f488666028bc07dccbd4d27c87b35bed8b8a09b0c6fd8050ee48b908b9e880f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vmzbr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:12Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vzk8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.740743 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d6107-c270-4f2f-9cc6-d5994972096d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb9b4b2b63562955b2a40fc44e106090667ddac11379d5bca05a7af8226b9998\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc144a0dd62f8f065dfaee280a342ebb80f1ff9b195a69e7e71ed889ad037ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7hvrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mwg2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.764152 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56b581ac-65f8-438a-b5ed-573a9ea831ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3799f958d51ff3163b59900fc818c22a964cd75fb05d272603bb73034196b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05ed08d2728d6baf95811903a1c16a507eefde0e62269c2737a83b88d4082f66\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0d5ada0a9bc855e756b0cb3b19188c47c4c83ad85ab0a6fbd4563042b11128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bf44a8e62bea615733c9f1d4458ba5293820da14c871ceb9958023351f51846\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f64d894b9ad4159fca195dad03961d5ace212438418e975b4f8a90576e82e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"message\\\":\\\"le observer\\\\nW1123 06:46:08.345827 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1123 06:46:08.346007 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:46:08.347025 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2852476214/tls.crt::/tmp/serving-cert-2852476214/tls.key\\\\\\\"\\\\nI1123 06:46:08.996479 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1123 06:46:09.000394 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1123 06:46:09.000421 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1123 06:46:09.000459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1123 06:46:09.000472 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1123 06:46:09.016849 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1123 06:46:09.016874 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1123 06:46:09.016882 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1123 06:46:09.016890 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1123 06:46:09.016892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1123 06:46:09.016895 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1123 06:46:09.017040 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF1123 06:46:09.018250 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7653db7629da32d5f657b3d405371c4db56eb4bde1ffef09384b99ca1bed61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c0976fec9bf48ed8508dc91af096cef8a25ffe813f17ebdcb9dad15b893c14d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.778562 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4486d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cfec62c-b172-462b-b3d4-360ffed40b72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3252c4b64583bd2cd421669fbace5bac2b9d1d57dfc942f98cd0152ba2f53a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ckjjs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4486d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.797820 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4p82c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0dde7218-bd4b-4585-b049-cb8db163fdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:46:56Z\\\",\\\"message\\\":\\\"2025-11-23T06:46:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6\\\\n2025-11-23T06:46:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4b946112-0f5f-4f7b-b23b-fc0e3d7dd9b6 to /host/opt/cni/bin/\\\\n2025-11-23T06:46:11Z [verbose] multus-daemon started\\\\n2025-11-23T06:46:11Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:46:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z6dbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:46:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4p82c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.808801 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.808855 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.808871 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.808896 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.808912 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:28Z","lastTransitionTime":"2025-11-23T06:47:28Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.818060 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.835670 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://893bc2ec5d68ed2dccef4561d1132da508810a6757f41f2ca41b76926dd6d0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.856532 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf76837-2c10-4d2a-b90a-2e8896ef3868\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a93bd8f5ff862fbfaa75eb5ea685aa7da3c567689a409556d9a5a4b3d08180f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://140e709544fcbc80b25d80353e837aa31f3d312fa94b95d3f59d8bdf9c2f0763\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8ce438c60242278f71887b0a07a1ff9f9d8d568425b7e8a85a26e0fafe7080b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac87f494e76af0022e02fe85f4de85e7dc10d0d073527068d178ea5d3d4901b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.875543 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"254f33a3-ea14-41e4-afc5-313bf40bdbd4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48e119c1ffe252b0c2cae6b18d6a5212c4e10f55452950ac6f840b683ed8c74e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5b9183fa2d4e99e25921d8a51472ac23e536b486c6eca54f91b0428d1ef55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0458c22faa64c23c4ea6f50f28be77da31179f9f59f71dba7fc886c0e891de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:45:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e444976c9ffc1c5fcb000fccd4e78fe0d0c514dce87491b4d83bf84aa98e551\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:45:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:45:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:45:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.895568 4988 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:46:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a59d7ca13a43718959a6ed4b16bfa306d4abef2bf70222b7410ffbdccd0b2340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:47:28Z is after 2025-08-24T17:21:41Z" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.911882 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.911942 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.911966 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.911994 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:28 crc kubenswrapper[4988]: I1123 06:47:28.912012 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:28Z","lastTransitionTime":"2025-11-23T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.017779 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.017849 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.017876 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.017904 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.017927 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.120285 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.120414 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.120474 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.120499 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.120518 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.223396 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.223454 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.223471 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.223496 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.223515 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.325996 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.326068 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.326091 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.326124 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.326146 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.429102 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.429150 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.429161 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.429178 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.429190 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.495444 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:29 crc kubenswrapper[4988]: E1123 06:47:29.495635 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.531516 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.531567 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.531583 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.531605 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.531624 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.634943 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.635079 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.635338 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.635366 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.635383 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.739124 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.739234 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.739263 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.739300 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.739325 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.842617 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.842678 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.842696 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.842757 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.842775 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.945942 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.946060 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.946085 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.946115 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:29 crc kubenswrapper[4988]: I1123 06:47:29.946137 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:29Z","lastTransitionTime":"2025-11-23T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.049540 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.049620 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.049646 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.049679 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.049704 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.152869 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.152933 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.152951 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.152977 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.152996 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.255849 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.255915 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.255936 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.255965 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.255987 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.358808 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.358862 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.358878 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.358899 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.358915 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.462344 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.462462 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.462486 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.462513 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.462539 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.495506 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.495616 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:30 crc kubenswrapper[4988]: E1123 06:47:30.495777 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.495864 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:30 crc kubenswrapper[4988]: E1123 06:47:30.496017 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:30 crc kubenswrapper[4988]: E1123 06:47:30.496065 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.565630 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.565697 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.565719 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.565749 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.565806 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.668899 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.668955 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.668975 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.668999 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.669015 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.771450 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.771509 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.771532 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.771558 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.771577 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.874113 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.874250 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.874276 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.874303 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.874320 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.977265 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.977340 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.977358 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.977383 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:30 crc kubenswrapper[4988]: I1123 06:47:30.977401 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:30Z","lastTransitionTime":"2025-11-23T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.080315 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.080373 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.080392 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.080422 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.080442 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:31Z","lastTransitionTime":"2025-11-23T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.183887 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.183954 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.183972 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.184001 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.184067 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:31Z","lastTransitionTime":"2025-11-23T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.286525 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.286563 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.286573 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.286588 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.286599 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:31Z","lastTransitionTime":"2025-11-23T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.389428 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.389485 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.389500 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.389528 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.389545 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:31Z","lastTransitionTime":"2025-11-23T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.493571 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.493626 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.493645 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.493669 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.493686 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:31Z","lastTransitionTime":"2025-11-23T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.496162 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:31 crc kubenswrapper[4988]: E1123 06:47:31.496421 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.596521 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.596601 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.596617 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.596666 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.596690 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:31Z","lastTransitionTime":"2025-11-23T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.700740 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.700818 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.700836 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.700869 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.700895 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:31Z","lastTransitionTime":"2025-11-23T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.804895 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.804963 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.804983 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.805013 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.805032 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:31Z","lastTransitionTime":"2025-11-23T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.908105 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.908161 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.908182 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.908245 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:31 crc kubenswrapper[4988]: I1123 06:47:31.908268 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:31Z","lastTransitionTime":"2025-11-23T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.011420 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.011471 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.011489 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.011515 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.011532 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.114393 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.114452 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.114470 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.114497 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.114515 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.217412 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.217492 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.217518 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.217552 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.217576 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.319971 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.320082 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.320098 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.320117 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.320131 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.423463 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.423518 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.423536 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.423559 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.423576 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.495822 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.495838 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.495974 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:32 crc kubenswrapper[4988]: E1123 06:47:32.496150 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:32 crc kubenswrapper[4988]: E1123 06:47:32.496309 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:32 crc kubenswrapper[4988]: E1123 06:47:32.496499 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.526004 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.526075 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.526101 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.526128 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.526145 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.628643 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.628696 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.628713 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.628736 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.628753 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.731783 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.731836 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.731853 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.731877 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.731898 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.834866 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.834933 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.834949 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.834972 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.834989 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.937751 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.937831 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.937853 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.937884 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:32 crc kubenswrapper[4988]: I1123 06:47:32.937907 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:32Z","lastTransitionTime":"2025-11-23T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.041434 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.041521 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.041539 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.041563 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.041580 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:33Z","lastTransitionTime":"2025-11-23T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.147639 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.147900 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.147922 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.147949 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.147969 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:33Z","lastTransitionTime":"2025-11-23T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.252958 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.253018 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.253035 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.253059 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.253076 4988 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:47:33Z","lastTransitionTime":"2025-11-23T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:47:33 crc kubenswrapper[4988]: I1123 06:47:33.495773 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:47:33 crc kubenswrapper[4988]: E1123 06:47:33.496350 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:47:34 crc kubenswrapper[4988]: I1123 06:47:34.495278 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:47:34 crc kubenswrapper[4988]: I1123 06:47:34.495381 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:47:34 crc kubenswrapper[4988]: E1123 06:47:34.495501 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f"
Nov 23 06:47:34 crc kubenswrapper[4988]: E1123 06:47:34.495637 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:47:34 crc kubenswrapper[4988]: I1123 06:47:34.495807 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:47:34 crc kubenswrapper[4988]: E1123 06:47:34.495959 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:47:35 crc kubenswrapper[4988]: I1123 06:47:35.495749 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:47:35 crc kubenswrapper[4988]: E1123 06:47:35.496028 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:47:36 crc kubenswrapper[4988]: I1123 06:47:36.495346 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:47:36 crc kubenswrapper[4988]: E1123 06:47:36.495546 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:47:36 crc kubenswrapper[4988]: I1123 06:47:36.495561 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs"
Nov 23 06:47:36 crc kubenswrapper[4988]: E1123 06:47:36.495740 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f"
Nov 23 06:47:36 crc kubenswrapper[4988]: I1123 06:47:36.495346 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:47:36 crc kubenswrapper[4988]: E1123 06:47:36.496013 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
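By this point the same four pods are cycling through "No sandbox for pod can be found" / "Error syncing pod, skipping" because the CNI configuration is still missing. When triaging a dump like this, a small hypothetical Go filter (the regex and program are illustrative assumptions, not an OpenShift tool) can list each blocked pod once:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the pod="..."/podUID="..." pair on kubelet "Error syncing pod" records.
	re := regexp.MustCompile(`Error syncing pod.*?pod="([^"]+)" podUID="([^"]+)"`)
	seen := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal records can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil && !seen[m[2]] {
			seen[m[2]] = true // report each pod UID only once
			fmt.Printf("%s (UID %s)\n", m[1], m[2])
		}
	}
}

Fed this journal on stdin, it would print network-check-source-55646444c4-trplf, network-metrics-daemon-l5wgs, networking-console-plugin-85b44fc459-gdk6g and network-check-target-xd92c, i.e. exactly the pods stuck behind the missing /etc/kubernetes/cni/net.d/ configuration.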
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.305244 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"]
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.305910 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.309057 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.309181 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.309765 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.309785 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.356732 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.356705577 podStartE2EDuration="1m29.356705577s" podCreationTimestamp="2025-11-23 06:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.334490487 +0000 UTC m=+109.643003320" watchObservedRunningTime="2025-11-23 06:47:37.356705577 +0000 UTC m=+109.665218350"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.357460 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.357452135 podStartE2EDuration="56.357452135s" podCreationTimestamp="2025-11-23 06:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.357406154 +0000 UTC m=+109.665918957" watchObservedRunningTime="2025-11-23 06:47:37.357452135 +0000 UTC m=+109.665964908"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.437542 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f18415a5-1a16-4669-8a2d-a3828b71901f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.437655 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f18415a5-1a16-4669-8a2d-a3828b71901f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.437711 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18415a5-1a16-4669-8a2d-a3828b71901f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.437776 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f18415a5-1a16-4669-8a2d-a3828b71901f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.437821 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f18415a5-1a16-4669-8a2d-a3828b71901f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.459930 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.459902978 podStartE2EDuration="31.459902978s" podCreationTimestamp="2025-11-23 06:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.459849006 +0000 UTC m=+109.768361829" watchObservedRunningTime="2025-11-23 06:47:37.459902978 +0000 UTC m=+109.768415751"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.495647 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:47:37 crc kubenswrapper[4988]: E1123 06:47:37.495873 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.524569 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=85.524549536 podStartE2EDuration="1m25.524549536s" podCreationTimestamp="2025-11-23 06:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.523525841 +0000 UTC m=+109.832038624" watchObservedRunningTime="2025-11-23 06:47:37.524549536 +0000 UTC m=+109.833062309"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.538896 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f18415a5-1a16-4669-8a2d-a3828b71901f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.538944 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f18415a5-1a16-4669-8a2d-a3828b71901f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.539022 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f18415a5-1a16-4669-8a2d-a3828b71901f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.539047 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f18415a5-1a16-4669-8a2d-a3828b71901f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.539088 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18415a5-1a16-4669-8a2d-a3828b71901f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.540187 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f18415a5-1a16-4669-8a2d-a3828b71901f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.540350 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f18415a5-1a16-4669-8a2d-a3828b71901f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.540756 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f18415a5-1a16-4669-8a2d-a3828b71901f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.546850 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18415a5-1a16-4669-8a2d-a3828b71901f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.558812 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f18415a5-1a16-4669-8a2d-a3828b71901f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2srm\" (UID: \"f18415a5-1a16-4669-8a2d-a3828b71901f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.623935 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.654149 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podStartSLOduration=89.654119989 podStartE2EDuration="1m29.654119989s" podCreationTimestamp="2025-11-23 06:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.617709329 +0000 UTC m=+109.926222132" watchObservedRunningTime="2025-11-23 06:47:37.654119989 +0000 UTC m=+109.962632772"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.676177 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4kp4h" podStartSLOduration=89.676159604 podStartE2EDuration="1m29.676159604s" podCreationTimestamp="2025-11-23 06:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.675219061 +0000 UTC m=+109.983731854" watchObservedRunningTime="2025-11-23 06:47:37.676159604 +0000 UTC m=+109.984672377"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.696989 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.696949518 podStartE2EDuration="1m28.696949518s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.694478547 +0000 UTC m=+110.002991310" watchObservedRunningTime="2025-11-23 06:47:37.696949518 +0000 UTC m=+110.005462281"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.710559 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4486d" podStartSLOduration=89.710545244 podStartE2EDuration="1m29.710545244s" podCreationTimestamp="2025-11-23 06:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.708799421 +0000 UTC m=+110.017312184" watchObservedRunningTime="2025-11-23 06:47:37.710545244 +0000 UTC m=+110.019058007"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.723062 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4p82c" podStartSLOduration=89.723047023 podStartE2EDuration="1m29.723047023s" podCreationTimestamp="2025-11-23 06:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.72251533 +0000 UTC m=+110.031028103" watchObservedRunningTime="2025-11-23 06:47:37.723047023 +0000 UTC m=+110.031559786"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.734049 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vzk8l" podStartSLOduration=88.734035645 podStartE2EDuration="1m28.734035645s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.7334145 +0000 UTC m=+110.041927273" watchObservedRunningTime="2025-11-23 06:47:37.734035645 +0000 UTC m=+110.042548408"
Nov 23 06:47:37 crc kubenswrapper[4988]: I1123 06:47:37.748540 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mwg2z" podStartSLOduration=88.748523753 podStartE2EDuration="1m28.748523753s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:37.74717579 +0000 UTC m=+110.055688553" watchObservedRunningTime="2025-11-23 06:47:37.748523753 +0000 UTC m=+110.057036526"
Nov 23 06:47:38 crc kubenswrapper[4988]: I1123 06:47:38.176765 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm" event={"ID":"f18415a5-1a16-4669-8a2d-a3828b71901f","Type":"ContainerStarted","Data":"c6f4b7b736da318eab1935237052a8151bcc15f49c30e313c50432c381afa662"}
Nov 23 06:47:38 crc kubenswrapper[4988]: I1123 06:47:38.176812 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm" event={"ID":"f18415a5-1a16-4669-8a2d-a3828b71901f","Type":"ContainerStarted","Data":"45bcb5d2cdedc4aae61ec0e7e5ec9d3d17b288faeefc3217836962773665bf38"}
Nov 23 06:47:38 crc kubenswrapper[4988]: I1123 06:47:38.198152 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2srm" podStartSLOduration=89.198121769 podStartE2EDuration="1m29.198121769s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:38.197466242 +0000 UTC m=+110.505979095" watchObservedRunningTime="2025-11-23 06:47:38.198121769 +0000 UTC m=+110.506634612"
Nov 23 06:47:38 crc kubenswrapper[4988]: I1123 06:47:38.495471 4988 util.go:30] "No sandbox for pod can be
found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:38 crc kubenswrapper[4988]: I1123 06:47:38.495484 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:38 crc kubenswrapper[4988]: E1123 06:47:38.497303 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:38 crc kubenswrapper[4988]: I1123 06:47:38.497563 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:38 crc kubenswrapper[4988]: E1123 06:47:38.497679 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:38 crc kubenswrapper[4988]: E1123 06:47:38.498163 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:39 crc kubenswrapper[4988]: I1123 06:47:39.495803 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:39 crc kubenswrapper[4988]: E1123 06:47:39.496013 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:40 crc kubenswrapper[4988]: I1123 06:47:40.495750 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:40 crc kubenswrapper[4988]: E1123 06:47:40.495980 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:40 crc kubenswrapper[4988]: I1123 06:47:40.496457 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:40 crc kubenswrapper[4988]: E1123 06:47:40.496593 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:40 crc kubenswrapper[4988]: I1123 06:47:40.496784 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:40 crc kubenswrapper[4988]: I1123 06:47:40.496979 4988 scope.go:117] "RemoveContainer" containerID="2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7" Nov 23 06:47:40 crc kubenswrapper[4988]: E1123 06:47:40.497021 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:41 crc kubenswrapper[4988]: I1123 06:47:41.192168 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/3.log" Nov 23 06:47:41 crc kubenswrapper[4988]: I1123 06:47:41.196115 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerStarted","Data":"a2a5dbc04610d0a4a8e6e80a2ce783434d59d7f84da7b04c4a1c7fba5e900935"} Nov 23 06:47:41 crc kubenswrapper[4988]: I1123 06:47:41.196743 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:47:41 crc kubenswrapper[4988]: I1123 06:47:41.227704 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podStartSLOduration=93.227681585 podStartE2EDuration="1m33.227681585s" podCreationTimestamp="2025-11-23 06:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:41.226954657 +0000 UTC m=+113.535467440" watchObservedRunningTime="2025-11-23 06:47:41.227681585 +0000 UTC m=+113.536194358" Nov 23 06:47:41 crc kubenswrapper[4988]: I1123 06:47:41.495899 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:41 crc kubenswrapper[4988]: E1123 06:47:41.496082 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:41 crc kubenswrapper[4988]: I1123 06:47:41.554343 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l5wgs"] Nov 23 06:47:41 crc kubenswrapper[4988]: I1123 06:47:41.554575 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:41 crc kubenswrapper[4988]: E1123 06:47:41.554697 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:42 crc kubenswrapper[4988]: I1123 06:47:42.495926 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:42 crc kubenswrapper[4988]: I1123 06:47:42.496006 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:42 crc kubenswrapper[4988]: E1123 06:47:42.496071 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:47:42 crc kubenswrapper[4988]: E1123 06:47:42.496400 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.495334 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.495397 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:43 crc kubenswrapper[4988]: E1123 06:47:43.495513 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:47:43 crc kubenswrapper[4988]: E1123 06:47:43.495734 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l5wgs" podUID="1a94eb06-d03a-43c9-8004-73d48280435f" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.817691 4988 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.817878 4988 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.875543 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.876575 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.876873 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.877565 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.879678 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.885597 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.886810 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.886808 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.886971 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887045 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887072 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887146 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887165 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887169 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887072 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887297 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887286 4988 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887390 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.887509 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.888678 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.889816 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-krd2k"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.889972 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.891986 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.892853 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xg5r6"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.893288 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.893406 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.899529 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.900278 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.901832 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908511 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-image-import-ca\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908609 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908667 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-etcd-client\") pod 
\"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908711 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2qw\" (UniqueName: \"kubernetes.io/projected/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-kube-api-access-mv2qw\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908752 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908787 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-encryption-config\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908827 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-etcd-client\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908872 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908915 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rctbb\" (UniqueName: \"kubernetes.io/projected/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-kube-api-access-rctbb\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908953 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-audit-dir\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.908986 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-dir\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909034 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909073 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cb9v\" (UniqueName: \"kubernetes.io/projected/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-kube-api-access-6cb9v\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909114 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md29l\" (UniqueName: \"kubernetes.io/projected/38368132-21a2-414a-8b15-b5c648bb871e-kube-api-access-md29l\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909154 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909220 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909279 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-audit-dir\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909368 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-config\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909417 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/712ae0d6-d5ac-49ad-942c-ce2b5184526e-machine-approver-tls\") pod \"machine-approver-56656f9798-bd6xn\" (UID: 
\"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909453 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909494 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-audit-policies\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909533 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909580 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-policies\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909639 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/712ae0d6-d5ac-49ad-942c-ce2b5184526e-auth-proxy-config\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909682 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-encryption-config\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909723 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909757 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-client-ca\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909801 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712ae0d6-d5ac-49ad-942c-ce2b5184526e-config\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909839 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-config\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909910 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-audit\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909954 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-serving-cert\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.909995 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvlvq\" (UniqueName: \"kubernetes.io/projected/712ae0d6-d5ac-49ad-942c-ce2b5184526e-kube-api-access-mvlvq\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.910038 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.910079 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-serving-cert\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.910122 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38368132-21a2-414a-8b15-b5c648bb871e-serving-cert\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 
06:47:43.910161 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-trusted-ca-bundle\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.910229 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.910272 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.910309 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.910339 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-etcd-serving-ca\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.910379 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-node-pullsecrets\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.911278 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.911305 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.911747 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.912044 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.912490 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.912719 
4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.912900 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.913102 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.913328 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.913626 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.916686 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.917742 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.918259 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.918610 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.918967 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.919362 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.922855 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.923070 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.923120 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.923475 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.923747 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.923825 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.938530 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.938731 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.939102 4988 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.939423 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dj2p8"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.939701 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j9nr5"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.939968 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.940132 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b48qm"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.940505 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mrd7r"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.940830 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.940947 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.940896 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.941083 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.941851 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-gfkdg"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.942122 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.942283 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.942365 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.942487 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.942515 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.942532 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.943852 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6hs4l"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.944124 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gvbhh"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.944301 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.947995 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.948080 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.948115 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-krd2k"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.948141 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.948282 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6hs4l" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.948320 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.948862 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.949251 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.949333 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.949670 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.950502 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.950964 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.951713 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.952442 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.954205 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.957477 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.959243 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c89ht"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.984370 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.985207 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dj2p8"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.986558 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.989587 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.989823 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.989969 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.990001 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.990103 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.990207 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.990117 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.990407 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b"] Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.991613 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.992765 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.993340 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.993932 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.994058 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.994286 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.994515 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.994637 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.994789 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.994887 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.995579 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.995980 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.996269 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.996332 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.996441 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.996500 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.996564 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.996602 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.996562 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.996728 4988 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.996824 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.997054 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.997169 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.997208 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.997232 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.997289 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.997376 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.997395 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.997467 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.997578 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.998105 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.998265 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.998594 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.998688 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.999004 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.999162 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 23 06:47:43 crc kubenswrapper[4988]: I1123 06:47:43.999336 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.001291 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.003384 4988 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.008720 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.008893 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.021292 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.021642 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.022459 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.024241 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.024302 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.025300 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.025418 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.026506 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-node-pullsecrets\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.026842 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-node-pullsecrets\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.027089 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-image-import-ca\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.027119 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.027148 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-etcd-client\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.027170 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2qw\" (UniqueName: \"kubernetes.io/projected/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-kube-api-access-mv2qw\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.027202 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.027221 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-encryption-config\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.027241 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/038f34fe-6053-4d30-bea5-a694f4a18cf4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-b8n75\" (UID: \"038f34fe-6053-4d30-bea5-a694f4a18cf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.027373 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.027464 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-etcd-client\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.028902 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.029386 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.029900 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.030796 4988 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-dir\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.030837 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rctbb\" (UniqueName: \"kubernetes.io/projected/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-kube-api-access-rctbb\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.030890 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-audit-dir\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.030962 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.030957 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-dir\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.030984 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cb9v\" (UniqueName: \"kubernetes.io/projected/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-kube-api-access-6cb9v\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031005 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md29l\" (UniqueName: \"kubernetes.io/projected/38368132-21a2-414a-8b15-b5c648bb871e-kube-api-access-md29l\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031031 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031063 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031115 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038f34fe-6053-4d30-bea5-a694f4a18cf4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-b8n75\" (UID: \"038f34fe-6053-4d30-bea5-a694f4a18cf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031135 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-audit-dir\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031179 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-config\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031211 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/712ae0d6-d5ac-49ad-942c-ce2b5184526e-machine-approver-tls\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031232 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031252 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-audit-policies\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031275 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031303 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-policies\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031337 
4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/712ae0d6-d5ac-49ad-942c-ce2b5184526e-auth-proxy-config\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031362 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-encryption-config\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031379 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031382 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031395 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-client-ca\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031415 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712ae0d6-d5ac-49ad-942c-ce2b5184526e-config\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031431 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-config\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031448 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-serving-cert\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031463 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-audit\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031485 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvlvq\" (UniqueName: 
\"kubernetes.io/projected/712ae0d6-d5ac-49ad-942c-ce2b5184526e-kube-api-access-mvlvq\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031502 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031536 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031551 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-serving-cert\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031567 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38368132-21a2-414a-8b15-b5c648bb871e-serving-cert\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031586 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-trusted-ca-bundle\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031605 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-etcd-serving-ca\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031623 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031641 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031659 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsqn6\" (UniqueName: \"kubernetes.io/projected/038f34fe-6053-4d30-bea5-a694f4a18cf4-kube-api-access-hsqn6\") pod \"openshift-apiserver-operator-796bbdcf4f-b8n75\" (UID: \"038f34fe-6053-4d30-bea5-a694f4a18cf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.031601 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-image-import-ca\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.032345 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-policies\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.032353 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/712ae0d6-d5ac-49ad-942c-ce2b5184526e-auth-proxy-config\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.032793 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.032834 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-audit-dir\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.033081 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.033906 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-config\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.034232 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-audit-policies\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc 
kubenswrapper[4988]: I1123 06:47:44.034983 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.035005 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/712ae0d6-d5ac-49ad-942c-ce2b5184526e-config\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.035568 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-audit-dir\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.035688 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.035899 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-etcd-client\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.036045 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-config\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.036104 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.036142 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-m24t9"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.036290 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.036526 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-etcd-serving-ca\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.036689 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.037091 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-audit\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.037141 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.037298 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.037931 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.038060 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-client-ca\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.038343 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-trusted-ca-bundle\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.039025 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.041656 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h7nl4"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.042531 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.042654 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.043342 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.044293 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qm85c"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.045045 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.046495 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.046975 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.047051 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.047487 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.051973 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.053159 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.053346 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.053865 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.054319 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.055264 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.057250 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.057946 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.058211 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.060443 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.061115 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.062976 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-encryption-config\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.063181 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-76t4n"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.063282 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.063722 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jxdnl"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.063773 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.064106 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-etcd-client\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.064270 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.064885 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.068299 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.068810 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.068981 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.070304 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.070795 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38368132-21a2-414a-8b15-b5c648bb871e-serving-cert\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.071092 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.071500 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.074891 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-b847l"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.076272 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 
06:47:44.076296 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6hs4l"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.076359 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.079793 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.080602 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sfvqw"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.082853 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.085263 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-serving-cert\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.085265 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-serving-cert\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.087033 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xg5r6"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.087065 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gfkdg"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.087146 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.087165 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.087739 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-encryption-config\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.088241 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/712ae0d6-d5ac-49ad-942c-ce2b5184526e-machine-approver-tls\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.088883 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b48qm"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.090455 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.091802 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j9nr5"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.094744 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.096150 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.099275 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.111983 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.131028 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c89ht"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.131083 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.131095 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133626 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27fe19de-ad11-4994-954a-754a1f6f57ae-serving-cert\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133669 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-oauth-serving-cert\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133713 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/44706da4-47de-42d4-a6ad-3ba5016b1b6f-metrics-tls\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133736 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b34e7-0051-4130-b6eb-289532de720c-etcd-ca\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133762 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/038f34fe-6053-4d30-bea5-a694f4a18cf4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-b8n75\" (UID: \"038f34fe-6053-4d30-bea5-a694f4a18cf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133796 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd280302-59fb-4175-bbd8-f6376ece7337-serving-cert\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133828 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a906f419-a9c8-480a-9824-4a9971c6d1ec-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f6zkr\" (UID: \"a906f419-a9c8-480a-9824-4a9971c6d1ec\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133852 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5aec85a9-cf10-4f54-9269-aab56ed0378a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133874 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40399b8a-0985-4070-875a-5e7922b19cc2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fnvcf\" (UID: \"40399b8a-0985-4070-875a-5e7922b19cc2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133895 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/44706da4-47de-42d4-a6ad-3ba5016b1b6f-trusted-ca\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133913 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40399b8a-0985-4070-875a-5e7922b19cc2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fnvcf\" (UID: \"40399b8a-0985-4070-875a-5e7922b19cc2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.133935 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27fe19de-ad11-4994-954a-754a1f6f57ae-service-ca-bundle\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.134478 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.148879 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mrd7r"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.148937 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gvbhh"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.149776 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.150900 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/038f34fe-6053-4d30-bea5-a694f4a18cf4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-b8n75\" (UID: \"038f34fe-6053-4d30-bea5-a694f4a18cf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151235 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-serving-cert\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151265 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89dmr\" (UniqueName: \"kubernetes.io/projected/a42d143b-74b0-4ff4-b3c1-6bd59e656461-kube-api-access-89dmr\") pod \"downloads-7954f5f757-6hs4l\" (UID: \"a42d143b-74b0-4ff4-b3c1-6bd59e656461\") " pod="openshift-console/downloads-7954f5f757-6hs4l" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151288 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgdb\" (UniqueName: \"kubernetes.io/projected/3296e499-a099-4bc7-b89a-ec155509e956-kube-api-access-7zgdb\") pod \"openshift-controller-manager-operator-756b6f6bc6-pptgb\" 
(UID: \"3296e499-a099-4bc7-b89a-ec155509e956\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151306 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-trusted-ca-bundle\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151328 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13a3afe5-ef35-4dfc-8842-88a3425a5397-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151343 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b34e7-0051-4130-b6eb-289532de720c-etcd-service-ca\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151376 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40399b8a-0985-4070-875a-5e7922b19cc2-config\") pod \"kube-apiserver-operator-766d6c64bb-fnvcf\" (UID: \"40399b8a-0985-4070-875a-5e7922b19cc2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151406 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296e499-a099-4bc7-b89a-ec155509e956-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pptgb\" (UID: \"3296e499-a099-4bc7-b89a-ec155509e956\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151426 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65vs\" (UniqueName: \"kubernetes.io/projected/5aec85a9-cf10-4f54-9269-aab56ed0378a-kube-api-access-q65vs\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151445 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151460 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5vzx\" (UniqueName: \"kubernetes.io/projected/44706da4-47de-42d4-a6ad-3ba5016b1b6f-kube-api-access-w5vzx\") pod 
\"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151478 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7520c71d-7367-4da7-9554-43cca9b53833-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tbs8b\" (UID: \"7520c71d-7367-4da7-9554-43cca9b53833\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151492 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27fe19de-ad11-4994-954a-754a1f6f57ae-config\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151507 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-serving-cert\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151526 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljk5j\" (UniqueName: \"kubernetes.io/projected/86592f41-a930-4436-96a8-4676e4bbf9bf-kube-api-access-ljk5j\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151546 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aec85a9-cf10-4f54-9269-aab56ed0378a-config\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151561 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d8b34e7-0051-4130-b6eb-289532de720c-etcd-client\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151576 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/13a3afe5-ef35-4dfc-8842-88a3425a5397-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151599 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038f34fe-6053-4d30-bea5-a694f4a18cf4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-b8n75\" (UID: 
\"038f34fe-6053-4d30-bea5-a694f4a18cf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151626 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd280302-59fb-4175-bbd8-f6376ece7337-trusted-ca\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151642 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fr2j\" (UniqueName: \"kubernetes.io/projected/27fe19de-ad11-4994-954a-754a1f6f57ae-kube-api-access-4fr2j\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151658 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-service-ca\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151674 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd280302-59fb-4175-bbd8-f6376ece7337-config\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151699 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27fe19de-ad11-4994-954a-754a1f6f57ae-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151713 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8b34e7-0051-4130-b6eb-289532de720c-config\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151733 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8b34e7-0051-4130-b6eb-289532de720c-serving-cert\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151753 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qqcq\" (UniqueName: \"kubernetes.io/projected/2d8b34e7-0051-4130-b6eb-289532de720c-kube-api-access-2qqcq\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151772 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7520c71d-7367-4da7-9554-43cca9b53833-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tbs8b\" (UID: \"7520c71d-7367-4da7-9554-43cca9b53833\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151787 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7520c71d-7367-4da7-9554-43cca9b53833-config\") pod \"kube-controller-manager-operator-78b949d7b-tbs8b\" (UID: \"7520c71d-7367-4da7-9554-43cca9b53833\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151801 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13a3afe5-ef35-4dfc-8842-88a3425a5397-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151823 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlvkk\" (UniqueName: \"kubernetes.io/projected/13a3afe5-ef35-4dfc-8842-88a3425a5397-kube-api-access-rlvkk\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151837 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89vfn\" (UniqueName: \"kubernetes.io/projected/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-kube-api-access-89vfn\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151862 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w749v\" (UniqueName: \"kubernetes.io/projected/a906f419-a9c8-480a-9824-4a9971c6d1ec-kube-api-access-w749v\") pod \"control-plane-machine-set-operator-78cbb6b69f-f6zkr\" (UID: \"a906f419-a9c8-480a-9824-4a9971c6d1ec\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151883 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-console-config\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151903 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-config\") pod 
\"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151925 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxn6\" (UniqueName: \"kubernetes.io/projected/bd280302-59fb-4175-bbd8-f6376ece7337-kube-api-access-mpxn6\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151951 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-oauth-config\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151964 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5aec85a9-cf10-4f54-9269-aab56ed0378a-images\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.151981 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296e499-a099-4bc7-b89a-ec155509e956-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pptgb\" (UID: \"3296e499-a099-4bc7-b89a-ec155509e956\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.152012 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsqn6\" (UniqueName: \"kubernetes.io/projected/038f34fe-6053-4d30-bea5-a694f4a18cf4-kube-api-access-hsqn6\") pod \"openshift-apiserver-operator-796bbdcf4f-b8n75\" (UID: \"038f34fe-6053-4d30-bea5-a694f4a18cf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.152026 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44706da4-47de-42d4-a6ad-3ba5016b1b6f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.152041 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-client-ca\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.152711 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038f34fe-6053-4d30-bea5-a694f4a18cf4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-b8n75\" (UID: 
\"038f34fe-6053-4d30-bea5-a694f4a18cf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.153301 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.153336 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sfvqw"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.160155 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.164455 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.164596 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.164688 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.166744 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-m24t9"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.167789 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dzkgh"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.168469 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.169061 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-s62tf"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.169708 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s62tf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.170252 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.171350 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.172408 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.173750 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.174812 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.175726 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.176919 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jxdnl"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.177953 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-b847l"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.178743 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.179306 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s62tf"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.180522 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-76t4n"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.181745 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h7nl4"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.182922 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dzkgh"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.183980 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-67798"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.184583 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.199182 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.218584 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.238567 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252507 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27fe19de-ad11-4994-954a-754a1f6f57ae-serving-cert\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252548 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-oauth-serving-cert\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252582 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/44706da4-47de-42d4-a6ad-3ba5016b1b6f-metrics-tls\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252603 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b34e7-0051-4130-b6eb-289532de720c-etcd-ca\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252637 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd280302-59fb-4175-bbd8-f6376ece7337-serving-cert\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252661 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a906f419-a9c8-480a-9824-4a9971c6d1ec-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f6zkr\" (UID: \"a906f419-a9c8-480a-9824-4a9971c6d1ec\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252693 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40399b8a-0985-4070-875a-5e7922b19cc2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fnvcf\" (UID: 
\"40399b8a-0985-4070-875a-5e7922b19cc2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252717 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5aec85a9-cf10-4f54-9269-aab56ed0378a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252742 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27fe19de-ad11-4994-954a-754a1f6f57ae-service-ca-bundle\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252763 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44706da4-47de-42d4-a6ad-3ba5016b1b6f-trusted-ca\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252784 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40399b8a-0985-4070-875a-5e7922b19cc2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fnvcf\" (UID: \"40399b8a-0985-4070-875a-5e7922b19cc2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252822 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-serving-cert\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252845 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89dmr\" (UniqueName: \"kubernetes.io/projected/a42d143b-74b0-4ff4-b3c1-6bd59e656461-kube-api-access-89dmr\") pod \"downloads-7954f5f757-6hs4l\" (UID: \"a42d143b-74b0-4ff4-b3c1-6bd59e656461\") " pod="openshift-console/downloads-7954f5f757-6hs4l" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252867 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-trusted-ca-bundle\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252905 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zgdb\" (UniqueName: \"kubernetes.io/projected/3296e499-a099-4bc7-b89a-ec155509e956-kube-api-access-7zgdb\") pod \"openshift-controller-manager-operator-756b6f6bc6-pptgb\" (UID: \"3296e499-a099-4bc7-b89a-ec155509e956\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:44 crc 
kubenswrapper[4988]: I1123 06:47:44.252940 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3296e499-a099-4bc7-b89a-ec155509e956-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pptgb\" (UID: \"3296e499-a099-4bc7-b89a-ec155509e956\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252969 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13a3afe5-ef35-4dfc-8842-88a3425a5397-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.252997 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b34e7-0051-4130-b6eb-289532de720c-etcd-service-ca\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253020 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40399b8a-0985-4070-875a-5e7922b19cc2-config\") pod \"kube-apiserver-operator-766d6c64bb-fnvcf\" (UID: \"40399b8a-0985-4070-875a-5e7922b19cc2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253048 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253082 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q65vs\" (UniqueName: \"kubernetes.io/projected/5aec85a9-cf10-4f54-9269-aab56ed0378a-kube-api-access-q65vs\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253115 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27fe19de-ad11-4994-954a-754a1f6f57ae-config\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253154 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5vzx\" (UniqueName: \"kubernetes.io/projected/44706da4-47de-42d4-a6ad-3ba5016b1b6f-kube-api-access-w5vzx\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253178 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7520c71d-7367-4da7-9554-43cca9b53833-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tbs8b\" (UID: \"7520c71d-7367-4da7-9554-43cca9b53833\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253229 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljk5j\" (UniqueName: \"kubernetes.io/projected/86592f41-a930-4436-96a8-4676e4bbf9bf-kube-api-access-ljk5j\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253254 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-serving-cert\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253277 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d8b34e7-0051-4130-b6eb-289532de720c-etcd-client\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253312 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aec85a9-cf10-4f54-9269-aab56ed0378a-config\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253346 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/13a3afe5-ef35-4dfc-8842-88a3425a5397-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253395 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd280302-59fb-4175-bbd8-f6376ece7337-trusted-ca\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253419 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27fe19de-ad11-4994-954a-754a1f6f57ae-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253441 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fr2j\" (UniqueName: \"kubernetes.io/projected/27fe19de-ad11-4994-954a-754a1f6f57ae-kube-api-access-4fr2j\") pod 
\"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253462 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-service-ca\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253483 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd280302-59fb-4175-bbd8-f6376ece7337-config\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253516 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8b34e7-0051-4130-b6eb-289532de720c-config\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253537 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8b34e7-0051-4130-b6eb-289532de720c-serving-cert\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253560 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7520c71d-7367-4da7-9554-43cca9b53833-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tbs8b\" (UID: \"7520c71d-7367-4da7-9554-43cca9b53833\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253584 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qqcq\" (UniqueName: \"kubernetes.io/projected/2d8b34e7-0051-4130-b6eb-289532de720c-kube-api-access-2qqcq\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253615 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7520c71d-7367-4da7-9554-43cca9b53833-config\") pod \"kube-controller-manager-operator-78b949d7b-tbs8b\" (UID: \"7520c71d-7367-4da7-9554-43cca9b53833\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253650 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13a3afe5-ef35-4dfc-8842-88a3425a5397-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: 
I1123 06:47:44.253673 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlvkk\" (UniqueName: \"kubernetes.io/projected/13a3afe5-ef35-4dfc-8842-88a3425a5397-kube-api-access-rlvkk\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253695 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w749v\" (UniqueName: \"kubernetes.io/projected/a906f419-a9c8-480a-9824-4a9971c6d1ec-kube-api-access-w749v\") pod \"control-plane-machine-set-operator-78cbb6b69f-f6zkr\" (UID: \"a906f419-a9c8-480a-9824-4a9971c6d1ec\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253717 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89vfn\" (UniqueName: \"kubernetes.io/projected/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-kube-api-access-89vfn\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253751 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-console-config\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253772 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-config\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253798 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpxn6\" (UniqueName: \"kubernetes.io/projected/bd280302-59fb-4175-bbd8-f6376ece7337-kube-api-access-mpxn6\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253830 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296e499-a099-4bc7-b89a-ec155509e956-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pptgb\" (UID: \"3296e499-a099-4bc7-b89a-ec155509e956\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253853 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-oauth-config\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253873 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/5aec85a9-cf10-4f54-9269-aab56ed0378a-images\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253904 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44706da4-47de-42d4-a6ad-3ba5016b1b6f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.253925 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-client-ca\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.256430 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b34e7-0051-4130-b6eb-289532de720c-etcd-ca\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.256899 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27fe19de-ad11-4994-954a-754a1f6f57ae-service-ca-bundle\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.257255 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aec85a9-cf10-4f54-9269-aab56ed0378a-config\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.257579 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5aec85a9-cf10-4f54-9269-aab56ed0378a-images\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.258335 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27fe19de-ad11-4994-954a-754a1f6f57ae-config\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.258569 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bd280302-59fb-4175-bbd8-f6376ece7337-trusted-ca\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.259105 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27fe19de-ad11-4994-954a-754a1f6f57ae-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.259484 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.260127 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd280302-59fb-4175-bbd8-f6376ece7337-config\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.260633 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3296e499-a099-4bc7-b89a-ec155509e956-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pptgb\" (UID: \"3296e499-a099-4bc7-b89a-ec155509e956\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.261415 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-oauth-serving-cert\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.261429 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-service-ca\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.261495 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-console-config\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.261588 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8b34e7-0051-4130-b6eb-289532de720c-config\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.261795 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13a3afe5-ef35-4dfc-8842-88a3425a5397-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.262018 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3296e499-a099-4bc7-b89a-ec155509e956-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pptgb\" (UID: \"3296e499-a099-4bc7-b89a-ec155509e956\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.262223 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d8b34e7-0051-4130-b6eb-289532de720c-etcd-client\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.262257 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-serving-cert\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.262405 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd280302-59fb-4175-bbd8-f6376ece7337-serving-cert\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.262823 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5aec85a9-cf10-4f54-9269-aab56ed0378a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.262947 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-oauth-config\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.263649 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8b34e7-0051-4130-b6eb-289532de720c-serving-cert\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.264557 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-trusted-ca-bundle\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.264727 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d8b34e7-0051-4130-b6eb-289532de720c-etcd-service-ca\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.265107 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40399b8a-0985-4070-875a-5e7922b19cc2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-fnvcf\" (UID: \"40399b8a-0985-4070-875a-5e7922b19cc2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.265241 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-client-ca\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.265264 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a906f419-a9c8-480a-9824-4a9971c6d1ec-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f6zkr\" (UID: \"a906f419-a9c8-480a-9824-4a9971c6d1ec\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.265308 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40399b8a-0985-4070-875a-5e7922b19cc2-config\") pod \"kube-apiserver-operator-766d6c64bb-fnvcf\" (UID: \"40399b8a-0985-4070-875a-5e7922b19cc2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.265466 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27fe19de-ad11-4994-954a-754a1f6f57ae-serving-cert\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.267316 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/13a3afe5-ef35-4dfc-8842-88a3425a5397-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.268987 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.269651 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-config\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.269817 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-serving-cert\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.279454 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.299145 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.308431 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/44706da4-47de-42d4-a6ad-3ba5016b1b6f-metrics-tls\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.319888 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.350465 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.355760 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44706da4-47de-42d4-a6ad-3ba5016b1b6f-trusted-ca\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.359451 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.378675 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.398750 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.419765 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.439870 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.460823 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.479731 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.495684 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.496457 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.499050 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.511893 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7520c71d-7367-4da7-9554-43cca9b53833-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tbs8b\" (UID: \"7520c71d-7367-4da7-9554-43cca9b53833\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.519802 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.527903 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7520c71d-7367-4da7-9554-43cca9b53833-config\") pod \"kube-controller-manager-operator-78b949d7b-tbs8b\" (UID: \"7520c71d-7367-4da7-9554-43cca9b53833\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.566136 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2qw\" (UniqueName: \"kubernetes.io/projected/ffa18810-f7ea-407d-9bdf-9e2e3ecd2250-kube-api-access-mv2qw\") pod \"apiserver-76f77b778f-krd2k\" (UID: \"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250\") " pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.598979 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rctbb\" (UniqueName: \"kubernetes.io/projected/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-kube-api-access-rctbb\") pod \"oauth-openshift-558db77b4-xg5r6\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.614571 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvlvq\" (UniqueName: \"kubernetes.io/projected/712ae0d6-d5ac-49ad-942c-ce2b5184526e-kube-api-access-mvlvq\") pod \"machine-approver-56656f9798-bd6xn\" (UID: \"712ae0d6-d5ac-49ad-942c-ce2b5184526e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.632338 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.636300 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cb9v\" (UniqueName: \"kubernetes.io/projected/dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4-kube-api-access-6cb9v\") pod \"apiserver-7bbb656c7d-r4rmr\" (UID: \"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.659301 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.662004 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md29l\" (UniqueName: \"kubernetes.io/projected/38368132-21a2-414a-8b15-b5c648bb871e-kube-api-access-md29l\") pod \"route-controller-manager-6576b87f9c-5pxjv\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.679877 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.699054 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.720140 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.750621 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.759609 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.784771 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.799707 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.819018 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.843093 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.849517 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.853958 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-krd2k"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.860022 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 23 06:47:44 crc kubenswrapper[4988]: W1123 06:47:44.867531 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffa18810_f7ea_407d_9bdf_9e2e3ecd2250.slice/crio-4d21ca8b4ecae8117f65f4606b07cea5e24a2efa6a77685a1d4f38f6c87cd946 WatchSource:0}: Error finding container 4d21ca8b4ecae8117f65f4606b07cea5e24a2efa6a77685a1d4f38f6c87cd946: Status 404 returned error can't find the container with id 4d21ca8b4ecae8117f65f4606b07cea5e24a2efa6a77685a1d4f38f6c87cd946 Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.880533 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.886162 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.900126 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.921448 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.931537 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.939218 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.951992 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xg5r6"] Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.959118 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.978877 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 23 06:47:44 crc kubenswrapper[4988]: W1123 06:47:44.991551 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ff29d2_357b_4ac7_9b6c_cfcd50ba3216.slice/crio-b0ac5d057602a0727c6297344f667b88a9374d0040264c2f3bb680a398c54b9c WatchSource:0}: Error finding container b0ac5d057602a0727c6297344f667b88a9374d0040264c2f3bb680a398c54b9c: Status 404 returned error can't find the container with id b0ac5d057602a0727c6297344f667b88a9374d0040264c2f3bb680a398c54b9c Nov 23 06:47:44 crc kubenswrapper[4988]: I1123 06:47:44.999524 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.018863 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.039328 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.047611 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr"] Nov 23 06:47:45 crc kubenswrapper[4988]: W1123 06:47:45.055393 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc7ae3d7_2238_4323_b9af_fe2af7ebd3d4.slice/crio-28730990dd068c7be86b68075b2432fee2e6b0310ad6f7807e716247611585b6 WatchSource:0}: Error finding container 28730990dd068c7be86b68075b2432fee2e6b0310ad6f7807e716247611585b6: Status 404 returned error can't find the container with id 28730990dd068c7be86b68075b2432fee2e6b0310ad6f7807e716247611585b6 Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.058792 4988 request.go:700] Waited for 1.011460786s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.062156 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.079252 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.099725 4988 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.119383 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.142242 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.159151 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.176519 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv"] Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.179551 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: W1123 06:47:45.187749 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38368132_21a2_414a_8b15_b5c648bb871e.slice/crio-2aeed15f774f349e49590e295f42d3082e5501bd5a637d8eb9fbc1bbe05601b1 WatchSource:0}: Error finding container 2aeed15f774f349e49590e295f42d3082e5501bd5a637d8eb9fbc1bbe05601b1: Status 404 returned error can't find the container with id 2aeed15f774f349e49590e295f42d3082e5501bd5a637d8eb9fbc1bbe05601b1 Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.199576 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.212380 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" event={"ID":"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250","Type":"ContainerStarted","Data":"513942b8df5a9ce2229d160ca614c5c9602d9d534a55c6d81aa37149df5aef51"} Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.212416 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" event={"ID":"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250","Type":"ContainerStarted","Data":"4d21ca8b4ecae8117f65f4606b07cea5e24a2efa6a77685a1d4f38f6c87cd946"} Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.214796 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" event={"ID":"38368132-21a2-414a-8b15-b5c648bb871e","Type":"ContainerStarted","Data":"2aeed15f774f349e49590e295f42d3082e5501bd5a637d8eb9fbc1bbe05601b1"} Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.217181 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" event={"ID":"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4","Type":"ContainerStarted","Data":"28730990dd068c7be86b68075b2432fee2e6b0310ad6f7807e716247611585b6"} Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.218462 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.218696 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" event={"ID":"712ae0d6-d5ac-49ad-942c-ce2b5184526e","Type":"ContainerStarted","Data":"e3e27565c1cf58927dc032c03e6f3b32703e299a5b78cce29d3e27a4e70d5c38"} Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.218720 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" event={"ID":"712ae0d6-d5ac-49ad-942c-ce2b5184526e","Type":"ContainerStarted","Data":"59056f41b0e0f5f040f4747ccec69d4a275f3e230766c1318b621da1b656ce5c"} Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.219583 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" event={"ID":"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216","Type":"ContainerStarted","Data":"b0ac5d057602a0727c6297344f667b88a9374d0040264c2f3bb680a398c54b9c"} Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.238653 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.258968 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.279302 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.299257 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.318839 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.338779 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.359612 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.379103 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.399945 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.419570 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.439891 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.459167 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.480313 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.495081 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.495083 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.499718 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.519732 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.539501 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.559880 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.579793 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.600345 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.625317 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.639247 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.660392 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.678656 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.699339 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.720076 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.739039 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.758499 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.779127 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.799351 4988 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.819397 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.859420 4988 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.875994 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsqn6\" (UniqueName: \"kubernetes.io/projected/038f34fe-6053-4d30-bea5-a694f4a18cf4-kube-api-access-hsqn6\") pod \"openshift-apiserver-operator-796bbdcf4f-b8n75\" (UID: \"038f34fe-6053-4d30-bea5-a694f4a18cf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.880592 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.901128 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.919360 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.940261 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.960281 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 23 06:47:45 crc kubenswrapper[4988]: I1123 06:47:45.979947 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.027402 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.030523 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.030594 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.039789 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.078405 4988 request.go:700] Waited for 1.824658009s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.091400 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40399b8a-0985-4070-875a-5e7922b19cc2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-fnvcf\" (UID: \"40399b8a-0985-4070-875a-5e7922b19cc2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.100404 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zgdb\" (UniqueName: \"kubernetes.io/projected/3296e499-a099-4bc7-b89a-ec155509e956-kube-api-access-7zgdb\") pod \"openshift-controller-manager-operator-756b6f6bc6-pptgb\" (UID: \"3296e499-a099-4bc7-b89a-ec155509e956\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.102078 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.117681 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89dmr\" (UniqueName: \"kubernetes.io/projected/a42d143b-74b0-4ff4-b3c1-6bd59e656461-kube-api-access-89dmr\") pod \"downloads-7954f5f757-6hs4l\" (UID: \"a42d143b-74b0-4ff4-b3c1-6bd59e656461\") " pod="openshift-console/downloads-7954f5f757-6hs4l" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.135985 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5vzx\" (UniqueName: \"kubernetes.io/projected/44706da4-47de-42d4-a6ad-3ba5016b1b6f-kube-api-access-w5vzx\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.170860 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fr2j\" (UniqueName: \"kubernetes.io/projected/27fe19de-ad11-4994-954a-754a1f6f57ae-kube-api-access-4fr2j\") pod \"authentication-operator-69f744f599-b48qm\" (UID: \"27fe19de-ad11-4994-954a-754a1f6f57ae\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.179995 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q65vs\" (UniqueName: \"kubernetes.io/projected/5aec85a9-cf10-4f54-9269-aab56ed0378a-kube-api-access-q65vs\") pod \"machine-api-operator-5694c8668f-j9nr5\" (UID: \"5aec85a9-cf10-4f54-9269-aab56ed0378a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.203842 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljk5j\" (UniqueName: \"kubernetes.io/projected/86592f41-a930-4436-96a8-4676e4bbf9bf-kube-api-access-ljk5j\") pod \"console-f9d7485db-gfkdg\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.224789 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89vfn\" (UniqueName: \"kubernetes.io/projected/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-kube-api-access-89vfn\") pod \"controller-manager-879f6c89f-dj2p8\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.226877 4988 generic.go:334] "Generic (PLEG): container finished" podID="ffa18810-f7ea-407d-9bdf-9e2e3ecd2250" containerID="513942b8df5a9ce2229d160ca614c5c9602d9d534a55c6d81aa37149df5aef51" exitCode=0 Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.226956 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" event={"ID":"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250","Type":"ContainerDied","Data":"513942b8df5a9ce2229d160ca614c5c9602d9d534a55c6d81aa37149df5aef51"} Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.226991 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-krd2k" event={"ID":"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250","Type":"ContainerStarted","Data":"be7112508b4a72655b9d8bb6a4fce0813e1fb2486c1c73cf7bfd4e6015f5c5a7"} Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.227004 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" event={"ID":"ffa18810-f7ea-407d-9bdf-9e2e3ecd2250","Type":"ContainerStarted","Data":"9b5593b0ebd5adee9693c9b144a6999755f469c3e5a3533a3fa8cc4555ac9afa"} Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.230571 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7520c71d-7367-4da7-9554-43cca9b53833-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tbs8b\" (UID: \"7520c71d-7367-4da7-9554-43cca9b53833\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.231818 4988 generic.go:334] "Generic (PLEG): container finished" podID="dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4" containerID="4f1a9ac98cf290983c155ee45499e91176e1219098834f09fafe10e51ac2f7e5" exitCode=0 Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.231881 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" event={"ID":"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4","Type":"ContainerDied","Data":"4f1a9ac98cf290983c155ee45499e91176e1219098834f09fafe10e51ac2f7e5"} Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.234241 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" event={"ID":"38368132-21a2-414a-8b15-b5c648bb871e","Type":"ContainerStarted","Data":"5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98"} Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.234273 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.236516 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" event={"ID":"712ae0d6-d5ac-49ad-942c-ce2b5184526e","Type":"ContainerStarted","Data":"10a611339a7c054cec3783e678371599562b3171b6134f6b51f5566ba4e20200"} Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.240249 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" event={"ID":"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216","Type":"ContainerStarted","Data":"4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2"} Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.240913 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.259740 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qqcq\" (UniqueName: \"kubernetes.io/projected/2d8b34e7-0051-4130-b6eb-289532de720c-kube-api-access-2qqcq\") pod \"etcd-operator-b45778765-gvbhh\" (UID: \"2d8b34e7-0051-4130-b6eb-289532de720c\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.269397 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.279613 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/13a3afe5-ef35-4dfc-8842-88a3425a5397-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.282257 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.293946 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.297314 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.303092 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlvkk\" (UniqueName: \"kubernetes.io/projected/13a3afe5-ef35-4dfc-8842-88a3425a5397-kube-api-access-rlvkk\") pod \"cluster-image-registry-operator-dc59b4c8b-rgnzk\" (UID: \"13a3afe5-ef35-4dfc-8842-88a3425a5397\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.329833 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w749v\" (UniqueName: \"kubernetes.io/projected/a906f419-a9c8-480a-9824-4a9971c6d1ec-kube-api-access-w749v\") pod \"control-plane-machine-set-operator-78cbb6b69f-f6zkr\" (UID: \"a906f419-a9c8-480a-9824-4a9971c6d1ec\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.331022 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6hs4l" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.331569 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.342418 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.351017 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75"] Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.351284 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.351404 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpxn6\" (UniqueName: \"kubernetes.io/projected/bd280302-59fb-4175-bbd8-f6376ece7337-kube-api-access-mpxn6\") pod \"console-operator-58897d9998-mrd7r\" (UID: \"bd280302-59fb-4175-bbd8-f6376ece7337\") " pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.358302 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44706da4-47de-42d4-a6ad-3ba5016b1b6f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c2lsl\" (UID: \"44706da4-47de-42d4-a6ad-3ba5016b1b6f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.361096 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.379298 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.394887 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.400931 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.418330 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.425723 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb"] Nov 23 06:47:46 crc kubenswrapper[4988]: W1123 06:47:46.445126 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod038f34fe_6053_4d30_bea5_a694f4a18cf4.slice/crio-86aab843f9d7e0883334732e35fb422b5dc0b35c4cb6a52f9202f69f0bfd9e45 WatchSource:0}: Error finding container 86aab843f9d7e0883334732e35fb422b5dc0b35c4cb6a52f9202f69f0bfd9e45: Status 404 returned error can't find the container with id 86aab843f9d7e0883334732e35fb422b5dc0b35c4cb6a52f9202f69f0bfd9e45 Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.457009 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.481757 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.502441 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 
06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.502805 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-trusted-ca\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.502855 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0e955596-7a86-4e71-8a38-6d0d62489a62-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6p2zp\" (UID: \"0e955596-7a86-4e71-8a38-6d0d62489a62\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.502891 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8693d0cd-897f-4bef-a923-783f1bf8c584-installation-pull-secrets\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: E1123 06:47:46.502932 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.002897328 +0000 UTC m=+119.311410091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.503060 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-tls\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.503155 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwrlg\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-kube-api-access-hwrlg\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.503229 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcn9s\" (UniqueName: \"kubernetes.io/projected/d419a07d-8155-40cc-a93a-0bcfdc2180f2-kube-api-access-xcn9s\") pod \"cluster-samples-operator-665b6dd947-p96cj\" (UID: \"d419a07d-8155-40cc-a93a-0bcfdc2180f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.503248 
4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e955596-7a86-4e71-8a38-6d0d62489a62-serving-cert\") pod \"openshift-config-operator-7777fb866f-6p2zp\" (UID: \"0e955596-7a86-4e71-8a38-6d0d62489a62\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.503307 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-certificates\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.503339 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8693d0cd-897f-4bef-a923-783f1bf8c584-ca-trust-extracted\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.503408 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d419a07d-8155-40cc-a93a-0bcfdc2180f2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p96cj\" (UID: \"d419a07d-8155-40cc-a93a-0bcfdc2180f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.503429 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-bound-sa-token\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.503514 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkbwq\" (UniqueName: \"kubernetes.io/projected/0e955596-7a86-4e71-8a38-6d0d62489a62-kube-api-access-xkbwq\") pod \"openshift-config-operator-7777fb866f-6p2zp\" (UID: \"0e955596-7a86-4e71-8a38-6d0d62489a62\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.505664 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.545043 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dj2p8"] Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.565097 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" Nov 23 06:47:46 crc kubenswrapper[4988]: W1123 06:47:46.565677 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e2bfd4a_7d4c_48ab_9985_e8d7fddde747.slice/crio-2abfbb6e4da461396c5b4950f94d46260a531f60a3750a9cac6d6b12dd752e5e WatchSource:0}: Error finding container 2abfbb6e4da461396c5b4950f94d46260a531f60a3750a9cac6d6b12dd752e5e: Status 404 returned error can't find the container with id 2abfbb6e4da461396c5b4950f94d46260a531f60a3750a9cac6d6b12dd752e5e Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.582430 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.606959 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:46 crc kubenswrapper[4988]: E1123 06:47:46.607246 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.107226047 +0000 UTC m=+119.415738810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607670 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1def0188-18ee-4cd5-b605-d2f2659777ee-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t44t6\" (UID: \"1def0188-18ee-4cd5-b605-d2f2659777ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607708 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkbwq\" (UniqueName: \"kubernetes.io/projected/0e955596-7a86-4e71-8a38-6d0d62489a62-kube-api-access-xkbwq\") pod \"openshift-config-operator-7777fb866f-6p2zp\" (UID: \"0e955596-7a86-4e71-8a38-6d0d62489a62\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607730 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f9c1e3e-ac71-4efc-9deb-21379b0996f2-cert\") pod \"ingress-canary-s62tf\" (UID: \"9f9c1e3e-ac71-4efc-9deb-21379b0996f2\") " pod="openshift-ingress-canary/ingress-canary-s62tf" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607761 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037989f7-21fc-4899-b869-b0ebfbe70cd6-secret-volume\") pod \"collect-profiles-29398005-2rwsp\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607807 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-proxy-tls\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607827 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-images\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607868 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0e97c8-fd27-446b-9bb9-2cc726bf292d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kmq57\" (UID: \"4e0e97c8-fd27-446b-9bb9-2cc726bf292d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607893 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m64dn\" (UniqueName: \"kubernetes.io/projected/f85d2cce-f57e-4242-8122-5ca62637c30d-kube-api-access-m64dn\") pod \"marketplace-operator-79b997595-jxdnl\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607917 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wml9k\" (UniqueName: \"kubernetes.io/projected/26860b1c-564f-4683-bc2a-8989f2f3540d-kube-api-access-wml9k\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.607949 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609e5031-3bbd-4270-9fef-a0a665d4b15d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5fbhd\" (UID: \"609e5031-3bbd-4270-9fef-a0a665d4b15d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608020 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/83ac5b17-6997-4282-b630-c5f94cde0103-proxy-tls\") pod \"machine-config-controller-84d6567774-kt56g\" (UID: \"83ac5b17-6997-4282-b630-c5f94cde0103\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:46 
crc kubenswrapper[4988]: I1123 06:47:46.608042 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26860b1c-564f-4683-bc2a-8989f2f3540d-apiservice-cert\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608063 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xvrh\" (UniqueName: \"kubernetes.io/projected/ec1ebad2-c865-4b2c-8202-31be32fa43d1-kube-api-access-8xvrh\") pod \"migrator-59844c95c7-l49vf\" (UID: \"ec1ebad2-c865-4b2c-8202-31be32fa43d1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608122 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-trusted-ca\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608149 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74pw5\" (UniqueName: \"kubernetes.io/projected/9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2-kube-api-access-74pw5\") pod \"multus-admission-controller-857f4d67dd-h7nl4\" (UID: \"9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608169 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c332732-3313-4ee2-b0bf-2b7df8100bca-metrics-tls\") pod \"dns-operator-744455d44c-m24t9\" (UID: \"3c332732-3313-4ee2-b0bf-2b7df8100bca\") " pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608218 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79552\" (UniqueName: \"kubernetes.io/projected/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-kube-api-access-79552\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608240 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26860b1c-564f-4683-bc2a-8989f2f3540d-webhook-cert\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608260 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhvlv\" (UniqueName: \"kubernetes.io/projected/037989f7-21fc-4899-b869-b0ebfbe70cd6-kube-api-access-jhvlv\") pod \"collect-profiles-29398005-2rwsp\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 
06:47:46.608284 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnbph\" (UniqueName: \"kubernetes.io/projected/3b2eebef-2026-4500-a84c-6af459ee73ce-kube-api-access-hnbph\") pod \"service-ca-9c57cc56f-76t4n\" (UID: \"3b2eebef-2026-4500-a84c-6af459ee73ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608340 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj6gn\" (UniqueName: \"kubernetes.io/projected/1def0188-18ee-4cd5-b605-d2f2659777ee-kube-api-access-kj6gn\") pod \"kube-storage-version-migrator-operator-b67b599dd-t44t6\" (UID: \"1def0188-18ee-4cd5-b605-d2f2659777ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608407 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk8fr\" (UniqueName: \"kubernetes.io/projected/9f0d9f44-8d25-4c53-bdb1-cd47c609b17f-kube-api-access-pk8fr\") pod \"service-ca-operator-777779d784-b847l\" (UID: \"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608459 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwrlg\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-kube-api-access-hwrlg\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608492 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcn9s\" (UniqueName: \"kubernetes.io/projected/d419a07d-8155-40cc-a93a-0bcfdc2180f2-kube-api-access-xcn9s\") pod \"cluster-samples-operator-665b6dd947-p96cj\" (UID: \"d419a07d-8155-40cc-a93a-0bcfdc2180f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608523 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-mountpoint-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608545 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f1d1c0d-9530-44b3-bde6-b7176d43928d-srv-cert\") pod \"catalog-operator-68c6474976-cwwnt\" (UID: \"8f1d1c0d-9530-44b3-bde6-b7176d43928d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608576 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-service-ca-bundle\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 
06:47:46.608608 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fd66e60-c5bd-450f-b70d-07146c9597b1-node-bootstrap-token\") pod \"machine-config-server-67798\" (UID: \"5fd66e60-c5bd-450f-b70d-07146c9597b1\") " pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608628 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eda40111-df6e-44cc-b820-94ae87fe18b3-srv-cert\") pod \"olm-operator-6b444d44fb-dk4s8\" (UID: \"eda40111-df6e-44cc-b820-94ae87fe18b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608648 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cd011cae-d14d-480f-ab5e-81678301dbd5-metrics-tls\") pod \"dns-default-dzkgh\" (UID: \"cd011cae-d14d-480f-ab5e-81678301dbd5\") " pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608671 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608703 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/83ac5b17-6997-4282-b630-c5f94cde0103-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kt56g\" (UID: \"83ac5b17-6997-4282-b630-c5f94cde0103\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608781 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fd66e60-c5bd-450f-b70d-07146c9597b1-certs\") pod \"machine-config-server-67798\" (UID: \"5fd66e60-c5bd-450f-b70d-07146c9597b1\") " pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608807 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-bound-sa-token\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608878 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/26860b1c-564f-4683-bc2a-8989f2f3540d-tmpfs\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.608940 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" 
(UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-socket-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609017 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609044 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mcrz\" (UniqueName: \"kubernetes.io/projected/cd011cae-d14d-480f-ab5e-81678301dbd5-kube-api-access-8mcrz\") pod \"dns-default-dzkgh\" (UID: \"cd011cae-d14d-480f-ab5e-81678301dbd5\") " pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609094 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jkct\" (UniqueName: \"kubernetes.io/projected/3c332732-3313-4ee2-b0bf-2b7df8100bca-kube-api-access-4jkct\") pod \"dns-operator-744455d44c-m24t9\" (UID: \"3c332732-3313-4ee2-b0bf-2b7df8100bca\") " pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609117 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjkq\" (UniqueName: \"kubernetes.io/projected/5fd66e60-c5bd-450f-b70d-07146c9597b1-kube-api-access-ljjkq\") pod \"machine-config-server-67798\" (UID: \"5fd66e60-c5bd-450f-b70d-07146c9597b1\") " pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609141 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-metrics-certs\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609178 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jxdnl\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609253 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-csi-data-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609308 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4s8f\" (UniqueName: \"kubernetes.io/projected/4e0e97c8-fd27-446b-9bb9-2cc726bf292d-kube-api-access-f4s8f\") pod 
\"package-server-manager-789f6589d5-kmq57\" (UID: \"4e0e97c8-fd27-446b-9bb9-2cc726bf292d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609359 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1def0188-18ee-4cd5-b605-d2f2659777ee-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t44t6\" (UID: \"1def0188-18ee-4cd5-b605-d2f2659777ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609383 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/609e5031-3bbd-4270-9fef-a0a665d4b15d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5fbhd\" (UID: \"609e5031-3bbd-4270-9fef-a0a665d4b15d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609443 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0e955596-7a86-4e71-8a38-6d0d62489a62-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6p2zp\" (UID: \"0e955596-7a86-4e71-8a38-6d0d62489a62\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609466 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wg54\" (UniqueName: \"kubernetes.io/projected/9f9c1e3e-ac71-4efc-9deb-21379b0996f2-kube-api-access-6wg54\") pod \"ingress-canary-s62tf\" (UID: \"9f9c1e3e-ac71-4efc-9deb-21379b0996f2\") " pod="openshift-ingress-canary/ingress-canary-s62tf" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609488 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd011cae-d14d-480f-ab5e-81678301dbd5-config-volume\") pod \"dns-default-dzkgh\" (UID: \"cd011cae-d14d-480f-ab5e-81678301dbd5\") " pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609525 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8693d0cd-897f-4bef-a923-783f1bf8c584-installation-pull-secrets\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609575 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-tls\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609625 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-registration-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: 
\"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609649 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8d4\" (UniqueName: \"kubernetes.io/projected/96322cb2-41ed-47ab-ad8d-ce82b34ca692-kube-api-access-sd8d4\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609742 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6dh\" (UniqueName: \"kubernetes.io/projected/83ac5b17-6997-4282-b630-c5f94cde0103-kube-api-access-5g6dh\") pod \"machine-config-controller-84d6567774-kt56g\" (UID: \"83ac5b17-6997-4282-b630-c5f94cde0103\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609769 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e955596-7a86-4e71-8a38-6d0d62489a62-serving-cert\") pod \"openshift-config-operator-7777fb866f-6p2zp\" (UID: \"0e955596-7a86-4e71-8a38-6d0d62489a62\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609789 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-default-certificate\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609809 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbshz\" (UniqueName: \"kubernetes.io/projected/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-kube-api-access-jbshz\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609854 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f0d9f44-8d25-4c53-bdb1-cd47c609b17f-serving-cert\") pod \"service-ca-operator-777779d784-b847l\" (UID: \"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609877 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhsqw\" (UniqueName: \"kubernetes.io/projected/eda40111-df6e-44cc-b820-94ae87fe18b3-kube-api-access-mhsqw\") pod \"olm-operator-6b444d44fb-dk4s8\" (UID: \"eda40111-df6e-44cc-b820-94ae87fe18b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609912 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-stats-auth\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " 
pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609934 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jxdnl\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609958 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eda40111-df6e-44cc-b820-94ae87fe18b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dk4s8\" (UID: \"eda40111-df6e-44cc-b820-94ae87fe18b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.609994 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0d9f44-8d25-4c53-bdb1-cd47c609b17f-config\") pod \"service-ca-operator-777779d784-b847l\" (UID: \"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.610051 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-certificates\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.610075 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h7nl4\" (UID: \"9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.610097 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b2eebef-2026-4500-a84c-6af459ee73ce-signing-cabundle\") pod \"service-ca-9c57cc56f-76t4n\" (UID: \"3b2eebef-2026-4500-a84c-6af459ee73ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.610145 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b2eebef-2026-4500-a84c-6af459ee73ce-signing-key\") pod \"service-ca-9c57cc56f-76t4n\" (UID: \"3b2eebef-2026-4500-a84c-6af459ee73ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.610169 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqbcj\" (UniqueName: \"kubernetes.io/projected/8f1d1c0d-9530-44b3-bde6-b7176d43928d-kube-api-access-tqbcj\") pod \"catalog-operator-68c6474976-cwwnt\" (UID: \"8f1d1c0d-9530-44b3-bde6-b7176d43928d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:46 
crc kubenswrapper[4988]: I1123 06:47:46.615279 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037989f7-21fc-4899-b869-b0ebfbe70cd6-config-volume\") pod \"collect-profiles-29398005-2rwsp\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.615476 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8693d0cd-897f-4bef-a923-783f1bf8c584-ca-trust-extracted\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.615547 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d419a07d-8155-40cc-a93a-0bcfdc2180f2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p96cj\" (UID: \"d419a07d-8155-40cc-a93a-0bcfdc2180f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.615583 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-plugins-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.615607 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f1d1c0d-9530-44b3-bde6-b7176d43928d-profile-collector-cert\") pod \"catalog-operator-68c6474976-cwwnt\" (UID: \"8f1d1c0d-9530-44b3-bde6-b7176d43928d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.615626 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609e5031-3bbd-4270-9fef-a0a665d4b15d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5fbhd\" (UID: \"609e5031-3bbd-4270-9fef-a0a665d4b15d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.626558 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0e955596-7a86-4e71-8a38-6d0d62489a62-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6p2zp\" (UID: \"0e955596-7a86-4e71-8a38-6d0d62489a62\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.628075 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-certificates\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.633324 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8693d0cd-897f-4bef-a923-783f1bf8c584-ca-trust-extracted\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht"
Nov 23 06:47:46 crc kubenswrapper[4988]: E1123 06:47:46.645311 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.145281607 +0000 UTC m=+119.453794370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.646102 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-trusted-ca\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.650574 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-tls\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht"
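The E1123 entry above is the only failure in this burst of mounts: the image-registry pod's PVC (pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8) is backed by the kubevirt.io.hostpath-provisioner CSI driver, which has not yet registered with the kubelet; the csi-hostpathplugin-sfvqw pod that provides it is still having its own registration-dir and socket-dir volumes mounted in this same second of log, so the MountDevice operation is queued for retry (durationBeforeRetry 500ms). A minimal diagnostic sketch, assuming client-go is available, a node named crc, and a kubeconfig at /root/.kube/config (none of these are stated in the log), would read the node's CSINode object, which the kubelet updates as drivers register:

```go
// Diagnostic sketch (not from the log): list the CSI drivers registered on a
// node by reading its CSINode object, which the kubelet maintains as plugins
// register. Node name "crc" and the kubeconfig path are assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// An empty list here explains "not found in the list of registered CSI drivers".
	for _, d := range node.Spec.Drivers {
		fmt.Printf("registered: %s (nodeID %s)\n", d.Name, d.NodeID)
	}
}
```

Once the hostpath plugin announces itself over the plugin-registration socket, kubevirt.io.hostpath-provisioner appears in this list and the queued retry can succeed.

Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.652061 4988 util.go:30] "No sandbox for pod can be found.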
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.653180 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d419a07d-8155-40cc-a93a-0bcfdc2180f2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-p96cj\" (UID: \"d419a07d-8155-40cc-a93a-0bcfdc2180f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.656316 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkbwq\" (UniqueName: \"kubernetes.io/projected/0e955596-7a86-4e71-8a38-6d0d62489a62-kube-api-access-xkbwq\") pod \"openshift-config-operator-7777fb866f-6p2zp\" (UID: \"0e955596-7a86-4e71-8a38-6d0d62489a62\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.657213 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8693d0cd-897f-4bef-a923-783f1bf8c584-installation-pull-secrets\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.658010 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e955596-7a86-4e71-8a38-6d0d62489a62-serving-cert\") pod \"openshift-config-operator-7777fb866f-6p2zp\" (UID: \"0e955596-7a86-4e71-8a38-6d0d62489a62\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.700432 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwrlg\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-kube-api-access-hwrlg\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.716614 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.716874 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-stats-auth\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.716910 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jxdnl\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.716937 4988 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eda40111-df6e-44cc-b820-94ae87fe18b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dk4s8\" (UID: \"eda40111-df6e-44cc-b820-94ae87fe18b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.716965 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0d9f44-8d25-4c53-bdb1-cd47c609b17f-config\") pod \"service-ca-operator-777779d784-b847l\" (UID: \"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.716985 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h7nl4\" (UID: \"9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717005 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b2eebef-2026-4500-a84c-6af459ee73ce-signing-cabundle\") pod \"service-ca-9c57cc56f-76t4n\" (UID: \"3b2eebef-2026-4500-a84c-6af459ee73ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717022 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037989f7-21fc-4899-b869-b0ebfbe70cd6-config-volume\") pod \"collect-profiles-29398005-2rwsp\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717043 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b2eebef-2026-4500-a84c-6af459ee73ce-signing-key\") pod \"service-ca-9c57cc56f-76t4n\" (UID: \"3b2eebef-2026-4500-a84c-6af459ee73ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717061 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqbcj\" (UniqueName: \"kubernetes.io/projected/8f1d1c0d-9530-44b3-bde6-b7176d43928d-kube-api-access-tqbcj\") pod \"catalog-operator-68c6474976-cwwnt\" (UID: \"8f1d1c0d-9530-44b3-bde6-b7176d43928d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717087 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f1d1c0d-9530-44b3-bde6-b7176d43928d-profile-collector-cert\") pod \"catalog-operator-68c6474976-cwwnt\" (UID: \"8f1d1c0d-9530-44b3-bde6-b7176d43928d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717106 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609e5031-3bbd-4270-9fef-a0a665d4b15d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5fbhd\" 
(UID: \"609e5031-3bbd-4270-9fef-a0a665d4b15d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717142 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-plugins-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717168 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1def0188-18ee-4cd5-b605-d2f2659777ee-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t44t6\" (UID: \"1def0188-18ee-4cd5-b605-d2f2659777ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717209 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f9c1e3e-ac71-4efc-9deb-21379b0996f2-cert\") pod \"ingress-canary-s62tf\" (UID: \"9f9c1e3e-ac71-4efc-9deb-21379b0996f2\") " pod="openshift-ingress-canary/ingress-canary-s62tf" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717227 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037989f7-21fc-4899-b869-b0ebfbe70cd6-secret-volume\") pod \"collect-profiles-29398005-2rwsp\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717245 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-images\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717264 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-proxy-tls\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717292 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0e97c8-fd27-446b-9bb9-2cc726bf292d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kmq57\" (UID: \"4e0e97c8-fd27-446b-9bb9-2cc726bf292d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717312 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m64dn\" (UniqueName: \"kubernetes.io/projected/f85d2cce-f57e-4242-8122-5ca62637c30d-kube-api-access-m64dn\") pod \"marketplace-operator-79b997595-jxdnl\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:46 crc 
kubenswrapper[4988]: I1123 06:47:46.717330 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wml9k\" (UniqueName: \"kubernetes.io/projected/26860b1c-564f-4683-bc2a-8989f2f3540d-kube-api-access-wml9k\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717348 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609e5031-3bbd-4270-9fef-a0a665d4b15d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5fbhd\" (UID: \"609e5031-3bbd-4270-9fef-a0a665d4b15d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717364 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26860b1c-564f-4683-bc2a-8989f2f3540d-apiservice-cert\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717397 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xvrh\" (UniqueName: \"kubernetes.io/projected/ec1ebad2-c865-4b2c-8202-31be32fa43d1-kube-api-access-8xvrh\") pod \"migrator-59844c95c7-l49vf\" (UID: \"ec1ebad2-c865-4b2c-8202-31be32fa43d1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717420 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/83ac5b17-6997-4282-b630-c5f94cde0103-proxy-tls\") pod \"machine-config-controller-84d6567774-kt56g\" (UID: \"83ac5b17-6997-4282-b630-c5f94cde0103\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717440 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74pw5\" (UniqueName: \"kubernetes.io/projected/9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2-kube-api-access-74pw5\") pod \"multus-admission-controller-857f4d67dd-h7nl4\" (UID: \"9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717457 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c332732-3313-4ee2-b0bf-2b7df8100bca-metrics-tls\") pod \"dns-operator-744455d44c-m24t9\" (UID: \"3c332732-3313-4ee2-b0bf-2b7df8100bca\") " pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717463 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcn9s\" (UniqueName: \"kubernetes.io/projected/d419a07d-8155-40cc-a93a-0bcfdc2180f2-kube-api-access-xcn9s\") pod \"cluster-samples-operator-665b6dd947-p96cj\" (UID: \"d419a07d-8155-40cc-a93a-0bcfdc2180f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717480 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-79552\" (UniqueName: \"kubernetes.io/projected/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-kube-api-access-79552\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717605 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnbph\" (UniqueName: \"kubernetes.io/projected/3b2eebef-2026-4500-a84c-6af459ee73ce-kube-api-access-hnbph\") pod \"service-ca-9c57cc56f-76t4n\" (UID: \"3b2eebef-2026-4500-a84c-6af459ee73ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717625 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26860b1c-564f-4683-bc2a-8989f2f3540d-webhook-cert\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717640 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhvlv\" (UniqueName: \"kubernetes.io/projected/037989f7-21fc-4899-b869-b0ebfbe70cd6-kube-api-access-jhvlv\") pod \"collect-profiles-29398005-2rwsp\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717665 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj6gn\" (UniqueName: \"kubernetes.io/projected/1def0188-18ee-4cd5-b605-d2f2659777ee-kube-api-access-kj6gn\") pod \"kube-storage-version-migrator-operator-b67b599dd-t44t6\" (UID: \"1def0188-18ee-4cd5-b605-d2f2659777ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717684 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk8fr\" (UniqueName: \"kubernetes.io/projected/9f0d9f44-8d25-4c53-bdb1-cd47c609b17f-kube-api-access-pk8fr\") pod \"service-ca-operator-777779d784-b847l\" (UID: \"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717707 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-mountpoint-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717723 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f1d1c0d-9530-44b3-bde6-b7176d43928d-srv-cert\") pod \"catalog-operator-68c6474976-cwwnt\" (UID: \"8f1d1c0d-9530-44b3-bde6-b7176d43928d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717743 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-service-ca-bundle\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717761 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fd66e60-c5bd-450f-b70d-07146c9597b1-node-bootstrap-token\") pod \"machine-config-server-67798\" (UID: \"5fd66e60-c5bd-450f-b70d-07146c9597b1\") " pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717783 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eda40111-df6e-44cc-b820-94ae87fe18b3-srv-cert\") pod \"olm-operator-6b444d44fb-dk4s8\" (UID: \"eda40111-df6e-44cc-b820-94ae87fe18b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717801 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cd011cae-d14d-480f-ab5e-81678301dbd5-metrics-tls\") pod \"dns-default-dzkgh\" (UID: \"cd011cae-d14d-480f-ab5e-81678301dbd5\") " pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717824 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/83ac5b17-6997-4282-b630-c5f94cde0103-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kt56g\" (UID: \"83ac5b17-6997-4282-b630-c5f94cde0103\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717844 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717865 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fd66e60-c5bd-450f-b70d-07146c9597b1-certs\") pod \"machine-config-server-67798\" (UID: \"5fd66e60-c5bd-450f-b70d-07146c9597b1\") " pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717890 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/26860b1c-564f-4683-bc2a-8989f2f3540d-tmpfs\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717910 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-socket-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717939 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mcrz\" (UniqueName: \"kubernetes.io/projected/cd011cae-d14d-480f-ab5e-81678301dbd5-kube-api-access-8mcrz\") pod \"dns-default-dzkgh\" (UID: \"cd011cae-d14d-480f-ab5e-81678301dbd5\") " pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717957 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jkct\" (UniqueName: \"kubernetes.io/projected/3c332732-3313-4ee2-b0bf-2b7df8100bca-kube-api-access-4jkct\") pod \"dns-operator-744455d44c-m24t9\" (UID: \"3c332732-3313-4ee2-b0bf-2b7df8100bca\") " pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717973 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljjkq\" (UniqueName: \"kubernetes.io/projected/5fd66e60-c5bd-450f-b70d-07146c9597b1-kube-api-access-ljjkq\") pod \"machine-config-server-67798\" (UID: \"5fd66e60-c5bd-450f-b70d-07146c9597b1\") " pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.717991 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-metrics-certs\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718012 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jxdnl\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718029 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-csi-data-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718285 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4s8f\" (UniqueName: \"kubernetes.io/projected/4e0e97c8-fd27-446b-9bb9-2cc726bf292d-kube-api-access-f4s8f\") pod \"package-server-manager-789f6589d5-kmq57\" (UID: \"4e0e97c8-fd27-446b-9bb9-2cc726bf292d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718312 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1def0188-18ee-4cd5-b605-d2f2659777ee-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-t44t6\" (UID: \"1def0188-18ee-4cd5-b605-d2f2659777ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718330 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/609e5031-3bbd-4270-9fef-a0a665d4b15d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5fbhd\" (UID: \"609e5031-3bbd-4270-9fef-a0a665d4b15d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718347 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd011cae-d14d-480f-ab5e-81678301dbd5-config-volume\") pod \"dns-default-dzkgh\" (UID: \"cd011cae-d14d-480f-ab5e-81678301dbd5\") " pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718365 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wg54\" (UniqueName: \"kubernetes.io/projected/9f9c1e3e-ac71-4efc-9deb-21379b0996f2-kube-api-access-6wg54\") pod \"ingress-canary-s62tf\" (UID: \"9f9c1e3e-ac71-4efc-9deb-21379b0996f2\") " pod="openshift-ingress-canary/ingress-canary-s62tf" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718385 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-registration-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718400 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd8d4\" (UniqueName: \"kubernetes.io/projected/96322cb2-41ed-47ab-ad8d-ce82b34ca692-kube-api-access-sd8d4\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718419 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6dh\" (UniqueName: \"kubernetes.io/projected/83ac5b17-6997-4282-b630-c5f94cde0103-kube-api-access-5g6dh\") pod \"machine-config-controller-84d6567774-kt56g\" (UID: \"83ac5b17-6997-4282-b630-c5f94cde0103\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718439 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-default-certificate\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718465 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbshz\" (UniqueName: \"kubernetes.io/projected/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-kube-api-access-jbshz\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718483 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhsqw\" (UniqueName: \"kubernetes.io/projected/eda40111-df6e-44cc-b820-94ae87fe18b3-kube-api-access-mhsqw\") pod \"olm-operator-6b444d44fb-dk4s8\" (UID: \"eda40111-df6e-44cc-b820-94ae87fe18b3\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.718500 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f0d9f44-8d25-4c53-bdb1-cd47c609b17f-serving-cert\") pod \"service-ca-operator-777779d784-b847l\" (UID: \"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.722110 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/83ac5b17-6997-4282-b630-c5f94cde0103-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kt56g\" (UID: \"83ac5b17-6997-4282-b630-c5f94cde0103\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.723355 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-mountpoint-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.724790 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.725179 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f0d9f44-8d25-4c53-bdb1-cd47c609b17f-serving-cert\") pod \"service-ca-operator-777779d784-b847l\" (UID: \"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l"
Nov 23 06:47:46 crc kubenswrapper[4988]: E1123 06:47:46.725283 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.224940227 +0000 UTC m=+119.533452990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.727676 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-registration-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.728531 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0d9f44-8d25-4c53-bdb1-cd47c609b17f-config\") pod \"service-ca-operator-777779d784-b847l\" (UID: \"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.729477 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd011cae-d14d-480f-ab5e-81678301dbd5-config-volume\") pod \"dns-default-dzkgh\" (UID: \"cd011cae-d14d-480f-ab5e-81678301dbd5\") " pod="openshift-dns/dns-default-dzkgh"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.729987 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jxdnl\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.731006 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/609e5031-3bbd-4270-9fef-a0a665d4b15d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5fbhd\" (UID: \"609e5031-3bbd-4270-9fef-a0a665d4b15d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.734184 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0e97c8-fd27-446b-9bb9-2cc726bf292d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kmq57\" (UID: \"4e0e97c8-fd27-446b-9bb9-2cc726bf292d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57"
Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.736944 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f1d1c0d-9530-44b3-bde6-b7176d43928d-profile-collector-cert\") pod \"catalog-operator-68c6474976-cwwnt\" (UID: \"8f1d1c0d-9530-44b3-bde6-b7176d43928d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt"
\"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-csi-data-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.741120 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-proxy-tls\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.742403 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/83ac5b17-6997-4282-b630-c5f94cde0103-proxy-tls\") pod \"machine-config-controller-84d6567774-kt56g\" (UID: \"83ac5b17-6997-4282-b630-c5f94cde0103\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.742508 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-socket-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.742438 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/26860b1c-564f-4683-bc2a-8989f2f3540d-tmpfs\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.742892 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/609e5031-3bbd-4270-9fef-a0a665d4b15d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5fbhd\" (UID: \"609e5031-3bbd-4270-9fef-a0a665d4b15d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.743037 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/96322cb2-41ed-47ab-ad8d-ce82b34ca692-plugins-dir\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.743420 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037989f7-21fc-4899-b869-b0ebfbe70cd6-secret-volume\") pod \"collect-profiles-29398005-2rwsp\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.743512 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26860b1c-564f-4683-bc2a-8989f2f3540d-apiservice-cert\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.744800 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-images\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.745683 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3b2eebef-2026-4500-a84c-6af459ee73ce-signing-cabundle\") pod \"service-ca-9c57cc56f-76t4n\" (UID: \"3b2eebef-2026-4500-a84c-6af459ee73ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.747315 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-metrics-certs\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.748589 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-bound-sa-token\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.750174 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1def0188-18ee-4cd5-b605-d2f2659777ee-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-t44t6\" (UID: \"1def0188-18ee-4cd5-b605-d2f2659777ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.752294 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3b2eebef-2026-4500-a84c-6af459ee73ce-signing-key\") pod \"service-ca-9c57cc56f-76t4n\" (UID: \"3b2eebef-2026-4500-a84c-6af459ee73ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.752588 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.753633 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f9c1e3e-ac71-4efc-9deb-21379b0996f2-cert\") pod \"ingress-canary-s62tf\" (UID: \"9f9c1e3e-ac71-4efc-9deb-21379b0996f2\") " pod="openshift-ingress-canary/ingress-canary-s62tf" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.754934 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-service-ca-bundle\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.755994 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1def0188-18ee-4cd5-b605-d2f2659777ee-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-t44t6\" (UID: \"1def0188-18ee-4cd5-b605-d2f2659777ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.756277 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jxdnl\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.757155 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f1d1c0d-9530-44b3-bde6-b7176d43928d-srv-cert\") pod \"catalog-operator-68c6474976-cwwnt\" (UID: \"8f1d1c0d-9530-44b3-bde6-b7176d43928d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.758466 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h7nl4\" (UID: \"9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.758794 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eda40111-df6e-44cc-b820-94ae87fe18b3-srv-cert\") pod \"olm-operator-6b444d44fb-dk4s8\" (UID: \"eda40111-df6e-44cc-b820-94ae87fe18b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.759046 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fd66e60-c5bd-450f-b70d-07146c9597b1-certs\") pod \"machine-config-server-67798\" (UID: \"5fd66e60-c5bd-450f-b70d-07146c9597b1\") " pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.759482 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fd66e60-c5bd-450f-b70d-07146c9597b1-node-bootstrap-token\") pod \"machine-config-server-67798\" (UID: \"5fd66e60-c5bd-450f-b70d-07146c9597b1\") " pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.760208 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-default-certificate\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.760908 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037989f7-21fc-4899-b869-b0ebfbe70cd6-config-volume\") pod \"collect-profiles-29398005-2rwsp\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.762118 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c332732-3313-4ee2-b0bf-2b7df8100bca-metrics-tls\") pod \"dns-operator-744455d44c-m24t9\" (UID: \"3c332732-3313-4ee2-b0bf-2b7df8100bca\") " pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.766592 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-stats-auth\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.768122 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26860b1c-564f-4683-bc2a-8989f2f3540d-webhook-cert\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.770296 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eda40111-df6e-44cc-b820-94ae87fe18b3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dk4s8\" (UID: \"eda40111-df6e-44cc-b820-94ae87fe18b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.773133 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cd011cae-d14d-480f-ab5e-81678301dbd5-metrics-tls\") pod \"dns-default-dzkgh\" (UID: \"cd011cae-d14d-480f-ab5e-81678301dbd5\") " pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.780283 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m64dn\" (UniqueName: \"kubernetes.io/projected/f85d2cce-f57e-4242-8122-5ca62637c30d-kube-api-access-m64dn\") pod \"marketplace-operator-79b997595-jxdnl\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.804818 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhvlv\" (UniqueName: \"kubernetes.io/projected/037989f7-21fc-4899-b869-b0ebfbe70cd6-kube-api-access-jhvlv\") pod \"collect-profiles-29398005-2rwsp\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.817625 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj6gn\" (UniqueName: \"kubernetes.io/projected/1def0188-18ee-4cd5-b605-d2f2659777ee-kube-api-access-kj6gn\") pod \"kube-storage-version-migrator-operator-b67b599dd-t44t6\" (UID: \"1def0188-18ee-4cd5-b605-d2f2659777ee\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.822410 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:46 crc kubenswrapper[4988]: E1123 06:47:46.822857 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.322838637 +0000 UTC m=+119.631351400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.844547 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.845221 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.852753 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-b48qm"] Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.865132 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4s8f\" (UniqueName: \"kubernetes.io/projected/4e0e97c8-fd27-446b-9bb9-2cc726bf292d-kube-api-access-f4s8f\") pod \"package-server-manager-789f6589d5-kmq57\" (UID: \"4e0e97c8-fd27-446b-9bb9-2cc726bf292d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.865139 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk8fr\" (UniqueName: \"kubernetes.io/projected/9f0d9f44-8d25-4c53-bdb1-cd47c609b17f-kube-api-access-pk8fr\") pod \"service-ca-operator-777779d784-b847l\" (UID: \"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.875503 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.887090 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-j9nr5"] Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.887303 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jkct\" (UniqueName: \"kubernetes.io/projected/3c332732-3313-4ee2-b0bf-2b7df8100bca-kube-api-access-4jkct\") pod \"dns-operator-744455d44c-m24t9\" (UID: \"3c332732-3313-4ee2-b0bf-2b7df8100bca\") " pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.898878 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wml9k\" (UniqueName: \"kubernetes.io/projected/26860b1c-564f-4683-bc2a-8989f2f3540d-kube-api-access-wml9k\") pod \"packageserver-d55dfcdfc-2wd55\" (UID: \"26860b1c-564f-4683-bc2a-8989f2f3540d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.903082 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.909969 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gfkdg"] Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.915668 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.923618 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:46 crc kubenswrapper[4988]: E1123 06:47:46.924471 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.424448379 +0000 UTC m=+119.732961142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.926895 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd8d4\" (UniqueName: \"kubernetes.io/projected/96322cb2-41ed-47ab-ad8d-ce82b34ca692-kube-api-access-sd8d4\") pod \"csi-hostpathplugin-sfvqw\" (UID: \"96322cb2-41ed-47ab-ad8d-ce82b34ca692\") " pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.936744 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74pw5\" (UniqueName: \"kubernetes.io/projected/9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2-kube-api-access-74pw5\") pod \"multus-admission-controller-857f4d67dd-h7nl4\" (UID: \"9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" Nov 23 06:47:46 crc kubenswrapper[4988]: W1123 06:47:46.948300 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aec85a9_cf10_4f54_9269_aab56ed0378a.slice/crio-2d435bbe9e31adfd056327d8876e143ae9a479159038a2418f4ff5bc4180f61f WatchSource:0}: Error finding container 2d435bbe9e31adfd056327d8876e143ae9a479159038a2418f4ff5bc4180f61f: Status 404 returned error can't find the container with id 2d435bbe9e31adfd056327d8876e143ae9a479159038a2418f4ff5bc4180f61f Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.956933 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/609e5031-3bbd-4270-9fef-a0a665d4b15d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5fbhd\" (UID: \"609e5031-3bbd-4270-9fef-a0a665d4b15d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.982057 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljjkq\" (UniqueName: \"kubernetes.io/projected/5fd66e60-c5bd-450f-b70d-07146c9597b1-kube-api-access-ljjkq\") pod \"machine-config-server-67798\" (UID: \"5fd66e60-c5bd-450f-b70d-07146c9597b1\") " pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:46 crc kubenswrapper[4988]: I1123 06:47:46.995440 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wg54\" (UniqueName: \"kubernetes.io/projected/9f9c1e3e-ac71-4efc-9deb-21379b0996f2-kube-api-access-6wg54\") pod \"ingress-canary-s62tf\" (UID: \"9f9c1e3e-ac71-4efc-9deb-21379b0996f2\") " pod="openshift-ingress-canary/ingress-canary-s62tf" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.004810 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.020977 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.024904 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.025475 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.025882 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.525862856 +0000 UTC m=+119.834375679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.030531 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79552\" (UniqueName: \"kubernetes.io/projected/aa787a9f-6854-4fc6-ab47-0a587c86e7b4-kube-api-access-79552\") pod \"machine-config-operator-74547568cd-6jf4r\" (UID: \"aa787a9f-6854-4fc6-ab47-0a587c86e7b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.036498 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqbcj\" (UniqueName: \"kubernetes.io/projected/8f1d1c0d-9530-44b3-bde6-b7176d43928d-kube-api-access-tqbcj\") pod \"catalog-operator-68c6474976-cwwnt\" (UID: \"8f1d1c0d-9530-44b3-bde6-b7176d43928d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.050590 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.063834 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.064692 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.074513 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xvrh\" (UniqueName: \"kubernetes.io/projected/ec1ebad2-c865-4b2c-8202-31be32fa43d1-kube-api-access-8xvrh\") pod \"migrator-59844c95c7-l49vf\" (UID: \"ec1ebad2-c865-4b2c-8202-31be32fa43d1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.082294 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.086761 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.086775 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mcrz\" (UniqueName: \"kubernetes.io/projected/cd011cae-d14d-480f-ab5e-81678301dbd5-kube-api-access-8mcrz\") pod \"dns-default-dzkgh\" (UID: \"cd011cae-d14d-480f-ab5e-81678301dbd5\") " pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.094483 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.101554 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.106684 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnbph\" (UniqueName: \"kubernetes.io/projected/3b2eebef-2026-4500-a84c-6af459ee73ce-kube-api-access-hnbph\") pod \"service-ca-9c57cc56f-76t4n\" (UID: \"3b2eebef-2026-4500-a84c-6af459ee73ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.126707 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.127094 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.627079208 +0000 UTC m=+119.935591971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.129006 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbshz\" (UniqueName: \"kubernetes.io/projected/792f32a3-6f0a-4808-8e0d-17e8224d5ae4-kube-api-access-jbshz\") pod \"router-default-5444994796-qm85c\" (UID: \"792f32a3-6f0a-4808-8e0d-17e8224d5ae4\") " pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.129793 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.153099 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6dh\" (UniqueName: \"kubernetes.io/projected/83ac5b17-6997-4282-b630-c5f94cde0103-kube-api-access-5g6dh\") pod \"machine-config-controller-84d6567774-kt56g\" (UID: \"83ac5b17-6997-4282-b630-c5f94cde0103\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.160979 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.174257 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhsqw\" (UniqueName: \"kubernetes.io/projected/eda40111-df6e-44cc-b820-94ae87fe18b3-kube-api-access-mhsqw\") pod \"olm-operator-6b444d44fb-dk4s8\" (UID: \"eda40111-df6e-44cc-b820-94ae87fe18b3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.181976 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6hs4l"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.184309 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gvbhh"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.206997 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.224362 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dzkgh" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.228458 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.228994 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-23 06:47:47.728975168 +0000 UTC m=+120.037487931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.250612 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s62tf" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.259813 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-67798" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.261917 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" event={"ID":"5aec85a9-cf10-4f54-9269-aab56ed0378a","Type":"ContainerStarted","Data":"2d435bbe9e31adfd056327d8876e143ae9a479159038a2418f4ff5bc4180f61f"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.263484 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" event={"ID":"7520c71d-7367-4da7-9554-43cca9b53833","Type":"ContainerStarted","Data":"a2890a7d9fbd3ce3becc22d130976ed5ef54ba4b93ffa7ebddbfdc56151b8f7b"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.281484 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" event={"ID":"038f34fe-6053-4d30-bea5-a694f4a18cf4","Type":"ContainerStarted","Data":"ed0828e080f0f42db3f4b1dc173aa2de4fa9f0a416a6d6142340fe659b204c7a"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.281534 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" event={"ID":"038f34fe-6053-4d30-bea5-a694f4a18cf4","Type":"ContainerStarted","Data":"86aab843f9d7e0883334732e35fb422b5dc0b35c4cb6a52f9202f69f0bfd9e45"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.283580 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" event={"ID":"40399b8a-0985-4070-875a-5e7922b19cc2","Type":"ContainerStarted","Data":"0e541c6a9930d9f2cd2f0641cf2286798066fb6d9b84d79db45b8b3166a7dab7"} Nov 23 06:47:47 crc kubenswrapper[4988]: W1123 06:47:47.296929 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d8b34e7_0051_4130_b6eb_289532de720c.slice/crio-5514141deb1b0ea53ab0d902cb4716329ebbf402cef604bb2302fc455d9a4336 WatchSource:0}: Error finding container 5514141deb1b0ea53ab0d902cb4716329ebbf402cef604bb2302fc455d9a4336: Status 404 returned error can't find the container with id 5514141deb1b0ea53ab0d902cb4716329ebbf402cef604bb2302fc455d9a4336 Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.317543 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.317936 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" event={"ID":"27fe19de-ad11-4994-954a-754a1f6f57ae","Type":"ContainerStarted","Data":"b88ba7162a280a31f311ffd27f5d178a6c801095d27f50e81048d4637b91da7d"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.326152 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jxdnl"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.333691 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.334038 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.334382 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.834360263 +0000 UTC m=+120.142873016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.334445 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.334755 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.834748452 +0000 UTC m=+120.143261215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.346884 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.360241 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.363336 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" event={"ID":"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747","Type":"ContainerStarted","Data":"ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.363380 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" event={"ID":"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747","Type":"ContainerStarted","Data":"2abfbb6e4da461396c5b4950f94d46260a531f60a3750a9cac6d6b12dd752e5e"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.363933 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.371506 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gfkdg" event={"ID":"86592f41-a930-4436-96a8-4676e4bbf9bf","Type":"ContainerStarted","Data":"088c0fb85307af24434e2b84d55a84a7569e7be833a56e737144a89412860dbb"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.374716 4988 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dj2p8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.374797 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" podUID="7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.389211 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" event={"ID":"dc7ae3d7-2238-4323-b9af-fe2af7ebd3d4","Type":"ContainerStarted","Data":"e181e9dcd8e46543fc6d6e30501f8312da923998a1930ba65c76241ad5d74d0d"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.393395 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" event={"ID":"3296e499-a099-4bc7-b89a-ec155509e956","Type":"ContainerStarted","Data":"73cd9ede1d25eb90229e99b8fc9412353ef72ce2ececa2ccc9ed1a4c3657a7b6"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.393439 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" event={"ID":"3296e499-a099-4bc7-b89a-ec155509e956","Type":"ContainerStarted","Data":"83668cece94b7ba4b0526cd0b9febf6fd05e34bb569eff44c8c60e971c35b8ab"} Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.413666 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.414023 
4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.416450 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mrd7r"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.426639 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.429294 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.435868 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.436016 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.935992935 +0000 UTC m=+120.244505688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.436517 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.443276 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:47.943254565 +0000 UTC m=+120.251767328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.515500 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-b847l"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.549952 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.551858 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.051829739 +0000 UTC m=+120.360342502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.652275 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.653108 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.153095713 +0000 UTC m=+120.461608476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.693570 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.755469 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.756100 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.256071078 +0000 UTC m=+120.564583841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.771249 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h7nl4"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.790775 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd"] Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.859664 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.860591 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.360578702 +0000 UTC m=+120.669091465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.927957 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" podStartSLOduration=98.927935937 podStartE2EDuration="1m38.927935937s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:47.927440325 +0000 UTC m=+120.235953088" watchObservedRunningTime="2025-11-23 06:47:47.927935937 +0000 UTC m=+120.236448700" Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.960610 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.961082 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.461026005 +0000 UTC m=+120.769538768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:47 crc kubenswrapper[4988]: I1123 06:47:47.961360 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:47 crc kubenswrapper[4988]: E1123 06:47:47.961702 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.461689692 +0000 UTC m=+120.770202455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.033118 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pptgb" podStartSLOduration=99.033085247 podStartE2EDuration="1m39.033085247s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:48.005838883 +0000 UTC m=+120.314351646" watchObservedRunningTime="2025-11-23 06:47:48.033085247 +0000 UTC m=+120.341598010" Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.046400 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57"] Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.070770 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-m24t9"] Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.081411 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.082120 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.582077908 +0000 UTC m=+120.890590671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.082848 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6"] Nov 23 06:47:48 crc kubenswrapper[4988]: W1123 06:47:48.121534 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1def0188_18ee_4cd5_b605_d2f2659777ee.slice/crio-77364fd12803be5fa63edacca85c88cf62d46a7f96b8bf626102fe6b7d96896f WatchSource:0}: Error finding container 77364fd12803be5fa63edacca85c88cf62d46a7f96b8bf626102fe6b7d96896f: Status 404 returned error can't find the container with id 77364fd12803be5fa63edacca85c88cf62d46a7f96b8bf626102fe6b7d96896f Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.182928 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.183790 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.683771302 +0000 UTC m=+120.992284065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.238979 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" podStartSLOduration=99.238957126 podStartE2EDuration="1m39.238957126s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:48.203646253 +0000 UTC m=+120.512159016" watchObservedRunningTime="2025-11-23 06:47:48.238957126 +0000 UTC m=+120.547469879" Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.240828 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" podStartSLOduration=99.240819352 podStartE2EDuration="1m39.240819352s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:48.240426993 +0000 UTC m=+120.548939766" watchObservedRunningTime="2025-11-23 06:47:48.240819352 +0000 UTC m=+120.549332105" Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.284100 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-b8n75" podStartSLOduration=99.284077642 podStartE2EDuration="1m39.284077642s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:48.282041812 +0000 UTC m=+120.590554585" watchObservedRunningTime="2025-11-23 06:47:48.284077642 +0000 UTC m=+120.592590405" Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.285947 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.286610 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.786572284 +0000 UTC m=+121.095085047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.403425 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.403855 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:48.903844493 +0000 UTC m=+121.212357256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.471167 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bd6xn" podStartSLOduration=100.471144687 podStartE2EDuration="1m40.471144687s" podCreationTimestamp="2025-11-23 06:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:48.466545653 +0000 UTC m=+120.775058416" watchObservedRunningTime="2025-11-23 06:47:48.471144687 +0000 UTC m=+120.779657450" Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.489076 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mrd7r" event={"ID":"bd280302-59fb-4175-bbd8-f6376ece7337","Type":"ContainerStarted","Data":"9499eef10292a731e8ef90a23f42f698e1110dd99f20964a66f27d00366e40c2"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.544912 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" event={"ID":"27fe19de-ad11-4994-954a-754a1f6f57ae","Type":"ContainerStarted","Data":"912c3cf702498f3ac50fcd7299256f438fb765f0be554aabb0a589c5591e1b58"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.546422 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" event={"ID":"5aec85a9-cf10-4f54-9269-aab56ed0378a","Type":"ContainerStarted","Data":"b3a81aea855bc34b56703f56b60e233bd0e20096ba95e03ec595d854dc9ea8e3"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.546485 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" 
event={"ID":"5aec85a9-cf10-4f54-9269-aab56ed0378a","Type":"ContainerStarted","Data":"e70aea5bcfe39035b854ba5bb4b78171220cd2ec35dd09dd3c8d05ee5f76627e"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.547074 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.581434 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.081409352 +0000 UTC m=+121.389922115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.581563 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.582177 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.082169271 +0000 UTC m=+121.390682034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.603098 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" event={"ID":"7520c71d-7367-4da7-9554-43cca9b53833","Type":"ContainerStarted","Data":"465c78f8631bb20c7073b86c28c953a4209b2bcec764571cef7e89115c31223c"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.610439 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" event={"ID":"9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2","Type":"ContainerStarted","Data":"624a19889867844292f7978096afe62de25edafcd5374ef0c19885971ecb1dbd"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.624875 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6hs4l" event={"ID":"a42d143b-74b0-4ff4-b3c1-6bd59e656461","Type":"ContainerStarted","Data":"73de8fc4676ce164777dca0582bba3284f8ae8ad23294e17ac68e30f37f8577e"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.624939 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6hs4l" event={"ID":"a42d143b-74b0-4ff4-b3c1-6bd59e656461","Type":"ContainerStarted","Data":"7c4fe8fa3bcb8ebdd3924bace274dcbc2f8cc42035a2343993074997b83c28b8"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.627094 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6hs4l" Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.643490 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" event={"ID":"4e0e97c8-fd27-446b-9bb9-2cc726bf292d","Type":"ContainerStarted","Data":"0a62967c6d7bd3ba066f0863ce7594f343a64bc3dea1c3e28fa841d63a8b0ceb"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.643743 4988 patch_prober.go:28] interesting pod/downloads-7954f5f757-6hs4l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.643778 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6hs4l" podUID="a42d143b-74b0-4ff4-b3c1-6bd59e656461" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.682644 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" event={"ID":"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f","Type":"ContainerStarted","Data":"bd517ac48df8d50d262cde7db6c5362f2bf8d0dcd4b575e5338f4be87459ac76"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.686681 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.687313 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.1872909 +0000 UTC m=+121.495803653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.706493 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qm85c" event={"ID":"792f32a3-6f0a-4808-8e0d-17e8224d5ae4","Type":"ContainerStarted","Data":"98ff780386605ba9d73ee8d4d11fc5a8a223287548e4355a128b7b76d1494965"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.719009 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" event={"ID":"d419a07d-8155-40cc-a93a-0bcfdc2180f2","Type":"ContainerStarted","Data":"648aacfa435a778a8f9c7e11444f4af9ff9083015525c4b830a58e5f745f0f1c"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.733435 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" event={"ID":"1def0188-18ee-4cd5-b605-d2f2659777ee","Type":"ContainerStarted","Data":"77364fd12803be5fa63edacca85c88cf62d46a7f96b8bf626102fe6b7d96896f"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.748651 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" event={"ID":"0e955596-7a86-4e71-8a38-6d0d62489a62","Type":"ContainerStarted","Data":"6afc3e5e856a68012e95a448f2d7fe619227b05322735710a9ae4262f70b99bb"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.760383 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" event={"ID":"13a3afe5-ef35-4dfc-8842-88a3425a5397","Type":"ContainerStarted","Data":"5e6fdad63186c6d00cce102c2c2a45b9c8653509f6c4567cdc42388852d5a5cf"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.760424 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" event={"ID":"13a3afe5-ef35-4dfc-8842-88a3425a5397","Type":"ContainerStarted","Data":"aa1cf6c67ee37bde02dcd7d651331d5578491293261853bbf75eb49d207fc143"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.762176 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" event={"ID":"f85d2cce-f57e-4242-8122-5ca62637c30d","Type":"ContainerStarted","Data":"d9bfcaf933b294d5df991952bb9dd21392dafe9b8d61efa2a67ee483548da186"} Nov 23 06:47:48 crc 
kubenswrapper[4988]: I1123 06:47:48.786400 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" event={"ID":"44706da4-47de-42d4-a6ad-3ba5016b1b6f","Type":"ContainerStarted","Data":"026b56442758d4a4e6825e7fdd6d0e2b6dc6222ef48728a77ede8fa251efb7ea"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.788050 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" event={"ID":"2d8b34e7-0051-4130-b6eb-289532de720c","Type":"ContainerStarted","Data":"5514141deb1b0ea53ab0d902cb4716329ebbf402cef604bb2302fc455d9a4336"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.789569 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.793442 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" podStartSLOduration=99.793429944 podStartE2EDuration="1m39.793429944s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:48.778353091 +0000 UTC m=+121.086865844" watchObservedRunningTime="2025-11-23 06:47:48.793429944 +0000 UTC m=+121.101942707" Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.796439 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.296407028 +0000 UTC m=+121.604919791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.803762 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" event={"ID":"3c332732-3313-4ee2-b0bf-2b7df8100bca","Type":"ContainerStarted","Data":"891a7ac96ee62583376f4dc7e897b57f951ec9bdf19a5bbaf5b81da34e7ccff6"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.812913 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-67798" event={"ID":"5fd66e60-c5bd-450f-b70d-07146c9597b1","Type":"ContainerStarted","Data":"49753880e28526477f5f53c0b80fd2b0a836999332a29ae2fbf4a0540f02cc4e"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.819457 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" event={"ID":"a906f419-a9c8-480a-9824-4a9971c6d1ec","Type":"ContainerStarted","Data":"7621cf9dde48a5291745086dd85dbb030a27c0f7d24c3a7911752e216ca49e06"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.819507 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" event={"ID":"a906f419-a9c8-480a-9824-4a9971c6d1ec","Type":"ContainerStarted","Data":"7a184f91392d8016c154e9d811f1fb0f7aa7f879c360d89bbc770117555424d3"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.843612 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gfkdg" event={"ID":"86592f41-a930-4436-96a8-4676e4bbf9bf","Type":"ContainerStarted","Data":"9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.855004 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r"] Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.859350 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" event={"ID":"40399b8a-0985-4070-875a-5e7922b19cc2","Type":"ContainerStarted","Data":"e8c138787b5be403bbf64eba046dab910c68f52f618aaaca0ef45b32770c15c0"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.865746 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" event={"ID":"609e5031-3bbd-4270-9fef-a0a665d4b15d","Type":"ContainerStarted","Data":"a30afee7c93e0c92afbc12e6742358278a5449f1dea2656dedb4a15908886fe9"} Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.881256 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dzkgh"] Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.884323 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt"] Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.890257 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.891169 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.39114565 +0000 UTC m=+121.699658413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.891601 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:48 crc kubenswrapper[4988]: E1123 06:47:48.891877 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.391870548 +0000 UTC m=+121.700383311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.910910 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:47:48 crc kubenswrapper[4988]: I1123 06:47:48.972373 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55"] Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.014681 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.015149 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-23 06:47:49.515122945 +0000 UTC m=+121.823635698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.040272 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-76t4n"] Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.053786 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g"] Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.082634 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" podStartSLOduration=100.082605473 podStartE2EDuration="1m40.082605473s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:49.068447383 +0000 UTC m=+121.376960146" watchObservedRunningTime="2025-11-23 06:47:49.082605473 +0000 UTC m=+121.391118236" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.116595 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.119993 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.619977217 +0000 UTC m=+121.928489980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.127929 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8"] Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.144107 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp"] Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.181399 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf"] Nov 23 06:47:49 crc kubenswrapper[4988]: W1123 06:47:49.192534 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeda40111_df6e_44cc_b820_94ae87fe18b3.slice/crio-6bfe4eb2f3082d6f262a68fbcfe3461c01b15b94d0ae0d8435f39f83445a3b6d WatchSource:0}: Error finding container 6bfe4eb2f3082d6f262a68fbcfe3461c01b15b94d0ae0d8435f39f83445a3b6d: Status 404 returned error can't find the container with id 6bfe4eb2f3082d6f262a68fbcfe3461c01b15b94d0ae0d8435f39f83445a3b6d Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.196605 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sfvqw"] Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.212451 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s62tf"] Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.223822 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.224077 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.72406129 +0000 UTC m=+122.032574053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.324971 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.325371 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.825358675 +0000 UTC m=+122.133871438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.380239 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6hs4l" podStartSLOduration=100.380221341 podStartE2EDuration="1m40.380221341s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:49.340995371 +0000 UTC m=+121.649508134" watchObservedRunningTime="2025-11-23 06:47:49.380221341 +0000 UTC m=+121.688734104" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.431337 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.431668 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:49.931652692 +0000 UTC m=+122.240165455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.549237 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.550161 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.050146672 +0000 UTC m=+122.358659435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.576662 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-gfkdg" podStartSLOduration=100.576635477 podStartE2EDuration="1m40.576635477s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:49.490750343 +0000 UTC m=+121.799263116" watchObservedRunningTime="2025-11-23 06:47:49.576635477 +0000 UTC m=+121.885148240" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.579953 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbs8b" podStartSLOduration=100.579945478 podStartE2EDuration="1m40.579945478s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:49.574980686 +0000 UTC m=+121.883493449" watchObservedRunningTime="2025-11-23 06:47:49.579945478 +0000 UTC m=+121.888458241" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.633373 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.633854 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.652805 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.653739 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.153722162 +0000 UTC m=+122.462234925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.684381 4988 patch_prober.go:28] interesting pod/apiserver-76f77b778f-krd2k container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]log ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]etcd ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/generic-apiserver-start-informers ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/max-in-flight-filter ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 23 06:47:49 crc kubenswrapper[4988]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/project.openshift.io-projectcache ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/openshift.io-startinformers ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 23 06:47:49 crc kubenswrapper[4988]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 23 06:47:49 crc kubenswrapper[4988]: livez check failed Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.684508 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" podUID="ffa18810-f7ea-407d-9bdf-9e2e3ecd2250" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.758520 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.759186 4988 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.259164899 +0000 UTC m=+122.567677652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.816239 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-b48qm" podStartSLOduration=100.816186979 podStartE2EDuration="1m40.816186979s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:49.802142292 +0000 UTC m=+122.110655045" watchObservedRunningTime="2025-11-23 06:47:49.816186979 +0000 UTC m=+122.124699742" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.852634 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.852707 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.873393 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.873528 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.373508016 +0000 UTC m=+122.682020779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.873812 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.874156 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.374146992 +0000 UTC m=+122.682659755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.900093 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" event={"ID":"aa787a9f-6854-4fc6-ab47-0a587c86e7b4","Type":"ContainerStarted","Data":"29f562416b09fd6b68644e8f2f0f573808527ff457e3ab58f0cff84228fdf6e4"} Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.900614 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-j9nr5" podStartSLOduration=100.900585405 podStartE2EDuration="1m40.900585405s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:49.837400813 +0000 UTC m=+122.145913576" watchObservedRunningTime="2025-11-23 06:47:49.900585405 +0000 UTC m=+122.209098158" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.901004 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" event={"ID":"96322cb2-41ed-47ab-ad8d-ce82b34ca692","Type":"ContainerStarted","Data":"968b7d3175cba144bbd138363e1adf725e9c0d653af4f2ac4c5876fe81e99d4e"} Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.910489 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" event={"ID":"2d8b34e7-0051-4130-b6eb-289532de720c","Type":"ContainerStarted","Data":"e5a3897c5de60e884ec70c16362ec2347340e45fd7cd6955ea53348cd41466a2"} Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.913392 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f6zkr" 
podStartSLOduration=100.913360391 podStartE2EDuration="1m40.913360391s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:49.910137782 +0000 UTC m=+122.218650555" watchObservedRunningTime="2025-11-23 06:47:49.913360391 +0000 UTC m=+122.221873154" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.924307 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.959859 4988 generic.go:334] "Generic (PLEG): container finished" podID="0e955596-7a86-4e71-8a38-6d0d62489a62" containerID="f79fd7b5fb3b5df724ed1c1e48a908ee7711776fb7d1055a75a4d813c075e147" exitCode=0 Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.960025 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" event={"ID":"0e955596-7a86-4e71-8a38-6d0d62489a62","Type":"ContainerDied","Data":"f79fd7b5fb3b5df724ed1c1e48a908ee7711776fb7d1055a75a4d813c075e147"} Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.960911 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-fnvcf" podStartSLOduration=100.960883686 podStartE2EDuration="1m40.960883686s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:49.950482189 +0000 UTC m=+122.258994962" watchObservedRunningTime="2025-11-23 06:47:49.960883686 +0000 UTC m=+122.269396449" Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.969811 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" event={"ID":"9f0d9f44-8d25-4c53-bdb1-cd47c609b17f","Type":"ContainerStarted","Data":"8aed4f9500bd910da851e66017958c019ef5b8786e31f763cb77747cebde1411"} Nov 23 06:47:49 crc kubenswrapper[4988]: I1123 06:47:49.978478 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:49 crc kubenswrapper[4988]: E1123 06:47:49.979777 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.479760343 +0000 UTC m=+122.788273106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.026372 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rgnzk" podStartSLOduration=101.026357865 podStartE2EDuration="1m41.026357865s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.025123334 +0000 UTC m=+122.333636107" watchObservedRunningTime="2025-11-23 06:47:50.026357865 +0000 UTC m=+122.334870628" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.081580 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mrd7r" event={"ID":"bd280302-59fb-4175-bbd8-f6376ece7337","Type":"ContainerStarted","Data":"1a052e656102731189f50609f10131606ecb615c66ebdafcae0ac795f48768b6"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.082341 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.083644 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.085520 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.585504497 +0000 UTC m=+122.894017260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.092734 4988 patch_prober.go:28] interesting pod/console-operator-58897d9998-mrd7r container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.092826 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mrd7r" podUID="bd280302-59fb-4175-bbd8-f6376ece7337" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.138122 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mrd7r" podStartSLOduration=101.138102167 podStartE2EDuration="1m41.138102167s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.137645956 +0000 UTC m=+122.446158739" watchObservedRunningTime="2025-11-23 06:47:50.138102167 +0000 UTC m=+122.446614930" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.140354 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzkgh" event={"ID":"cd011cae-d14d-480f-ab5e-81678301dbd5","Type":"ContainerStarted","Data":"9b58796ca92d491087e440c2ab47eb32ff1573766ea8892b6cb86bd2f29007c8"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.165542 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" event={"ID":"1def0188-18ee-4cd5-b605-d2f2659777ee","Type":"ContainerStarted","Data":"12710640ffc6dcc7e5880cc30e637f81b45046839b245a8bdf9d2439d00c13d4"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.174097 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-b847l" podStartSLOduration=101.174078736 podStartE2EDuration="1m41.174078736s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.169920733 +0000 UTC m=+122.478433496" watchObservedRunningTime="2025-11-23 06:47:50.174078736 +0000 UTC m=+122.482591499" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.174381 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s62tf" event={"ID":"9f9c1e3e-ac71-4efc-9deb-21379b0996f2","Type":"ContainerStarted","Data":"a7ad8ad6a6db6a8dfabb5f74727252b2bf8232768541aa4e9c2ff9e399dfc12e"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.188722 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.188805 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.688788469 +0000 UTC m=+122.997301232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.189177 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.190042 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.69002794 +0000 UTC m=+122.998540703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.239389 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" event={"ID":"26860b1c-564f-4683-bc2a-8989f2f3540d","Type":"ContainerStarted","Data":"b90382fe597e216292a932f66fad6b58209c0f83429b8b25987963510bc70063"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.240974 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.251855 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" event={"ID":"4e0e97c8-fd27-446b-9bb9-2cc726bf292d","Type":"ContainerStarted","Data":"c4169c1b6df6eeb88dc0c7038b3e31f955192c749860a1ca5d8d30cf195e88d9"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.255837 4988 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2wd55 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.255909 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" podUID="26860b1c-564f-4683-bc2a-8989f2f3540d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.271871 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" event={"ID":"83ac5b17-6997-4282-b630-c5f94cde0103","Type":"ContainerStarted","Data":"5e5c25c2a392bfe54c84f91aaf7671c2a1b68c7e83cdc449f67eb969a1e59688"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.275417 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" event={"ID":"609e5031-3bbd-4270-9fef-a0a665d4b15d","Type":"ContainerStarted","Data":"ea234c3557744c347a53fdccccbcd04240d168035a022638b7b9a5cf8901446e"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.281436 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" event={"ID":"d419a07d-8155-40cc-a93a-0bcfdc2180f2","Type":"ContainerStarted","Data":"f2728858393e663b57f34004cd9926a0a4f41e5a9fd839eba47ea60bd16a8cee"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.290354 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" event={"ID":"9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2","Type":"ContainerStarted","Data":"1aeffbb285bc4e8d9cc17b14d9cad0e00878e2b4f698616a0992c6e901feb107"} Nov 23 06:47:50 crc 
kubenswrapper[4988]: I1123 06:47:50.290562 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.290910 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.790893654 +0000 UTC m=+123.099406417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.293782 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" event={"ID":"8f1d1c0d-9530-44b3-bde6-b7176d43928d","Type":"ContainerStarted","Data":"b8db55a2a548f0e47b97e00880ea8040d29fd168462fd2a7a4326b6b78186b52"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.310095 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf" event={"ID":"ec1ebad2-c865-4b2c-8202-31be32fa43d1","Type":"ContainerStarted","Data":"aa7d5dd62d5cc30443ee0135218f8426c319379de912cf1be6fe94a0703cc4e2"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.312125 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gvbhh" podStartSLOduration=101.312088848 podStartE2EDuration="1m41.312088848s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.289833487 +0000 UTC m=+122.598346250" watchObservedRunningTime="2025-11-23 06:47:50.312088848 +0000 UTC m=+122.620601601" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.330847 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" podStartSLOduration=101.330818591 podStartE2EDuration="1m41.330818591s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.320703931 +0000 UTC m=+122.629216694" watchObservedRunningTime="2025-11-23 06:47:50.330818591 +0000 UTC m=+122.639331374" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.348122 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-67798" event={"ID":"5fd66e60-c5bd-450f-b70d-07146c9597b1","Type":"ContainerStarted","Data":"a498dc44a5c1015a59163fe64162ca638fd4038bccc2f0fafd543e78fa4d75d2"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.361211 4988 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-t44t6" podStartSLOduration=101.361169351 podStartE2EDuration="1m41.361169351s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.357615753 +0000 UTC m=+122.666128526" watchObservedRunningTime="2025-11-23 06:47:50.361169351 +0000 UTC m=+122.669682114" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.386055 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" event={"ID":"44706da4-47de-42d4-a6ad-3ba5016b1b6f","Type":"ContainerStarted","Data":"b54e853635ac65528aeda067f6c2d6b3b8b0eae16c1198842a96c0961e6c4d1b"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.386102 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" event={"ID":"44706da4-47de-42d4-a6ad-3ba5016b1b6f","Type":"ContainerStarted","Data":"315a8af4cf1991d83dce98c4fe89adeab0d7bc646694d26f92f8e7046765590b"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.393767 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.395406 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.895387447 +0000 UTC m=+123.203900210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.416331 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qm85c" event={"ID":"792f32a3-6f0a-4808-8e0d-17e8224d5ae4","Type":"ContainerStarted","Data":"44fa61df6f09b3dc1e77ae123718987c77f4736318fc19644cd94becaf651c28"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.450978 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5fbhd" podStartSLOduration=101.450949311 podStartE2EDuration="1m41.450949311s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.423965033 +0000 UTC m=+122.732477796" watchObservedRunningTime="2025-11-23 06:47:50.450949311 +0000 UTC m=+122.759462074" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.455336 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" event={"ID":"037989f7-21fc-4899-b869-b0ebfbe70cd6","Type":"ContainerStarted","Data":"33c6312775f402fad94edc5bef341b7d762dbb46ab72ad5c8222f04fae41a844"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.463367 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2lsl" podStartSLOduration=101.463333017 podStartE2EDuration="1m41.463333017s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.449877914 +0000 UTC m=+122.758390677" watchObservedRunningTime="2025-11-23 06:47:50.463333017 +0000 UTC m=+122.771845770" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.463567 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" event={"ID":"eda40111-df6e-44cc-b820-94ae87fe18b3","Type":"ContainerStarted","Data":"6bfe4eb2f3082d6f262a68fbcfe3461c01b15b94d0ae0d8435f39f83445a3b6d"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.464681 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" event={"ID":"3b2eebef-2026-4500-a84c-6af459ee73ce","Type":"ContainerStarted","Data":"787f6cffcf5c036f57cd8860bbc1a57113775ac067d23050613be42d7bc666a0"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.477106 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" event={"ID":"3c332732-3313-4ee2-b0bf-2b7df8100bca","Type":"ContainerStarted","Data":"ecb42f4ca2aea902ef687a0b45e5cee470b80bd3a39e1de34284a2ab46be432e"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.484499 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-67798" 
podStartSLOduration=6.484478349 podStartE2EDuration="6.484478349s" podCreationTimestamp="2025-11-23 06:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.482741576 +0000 UTC m=+122.791254350" watchObservedRunningTime="2025-11-23 06:47:50.484478349 +0000 UTC m=+122.792991102" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.497044 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.497357 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.997321227 +0000 UTC m=+123.305833990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.497929 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.498990 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:50.998982058 +0000 UTC m=+123.307494821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.511988 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" event={"ID":"f85d2cce-f57e-4242-8122-5ca62637c30d","Type":"ContainerStarted","Data":"d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9"} Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.512029 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.520357 4988 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jxdnl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.520415 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" podUID="f85d2cce-f57e-4242-8122-5ca62637c30d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.520735 4988 patch_prober.go:28] interesting pod/downloads-7954f5f757-6hs4l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.520754 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6hs4l" podUID="a42d143b-74b0-4ff4-b3c1-6bd59e656461" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.535924 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qm85c" podStartSLOduration=101.535899501 podStartE2EDuration="1m41.535899501s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.52212802 +0000 UTC m=+122.830640783" watchObservedRunningTime="2025-11-23 06:47:50.535899501 +0000 UTC m=+122.844412264" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.536233 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r4rmr" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.553927 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" podStartSLOduration=101.553904526 podStartE2EDuration="1m41.553904526s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:50.55164778 +0000 UTC m=+122.860160543" watchObservedRunningTime="2025-11-23 06:47:50.553904526 +0000 UTC m=+122.862417319" Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.600177 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.600371 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.100343564 +0000 UTC m=+123.408856327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.600778 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.601168 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.101157334 +0000 UTC m=+123.409670097 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.702851 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.703047 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-23 06:47:51.203019722 +0000 UTC m=+123.511532485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.703824 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.706924 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.206909018 +0000 UTC m=+123.515421781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.814388 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.815078 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.315064222 +0000 UTC m=+123.623576985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:50 crc kubenswrapper[4988]: I1123 06:47:50.922885 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:50 crc kubenswrapper[4988]: E1123 06:47:50.923257 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.423239247 +0000 UTC m=+123.731752140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.025162 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.025563 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.525541076 +0000 UTC m=+123.834053839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.127058 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.127491 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.627473796 +0000 UTC m=+123.935986559 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.229137 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.229352 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.729320764 +0000 UTC m=+124.037833527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.330766 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.331087 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.831074189 +0000 UTC m=+124.139586952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.348028 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.353795 4988 patch_prober.go:28] interesting pod/router-default-5444994796-qm85c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:51 crc kubenswrapper[4988]: [-]has-synced failed: reason withheld Nov 23 06:47:51 crc kubenswrapper[4988]: [+]process-running ok Nov 23 06:47:51 crc kubenswrapper[4988]: healthz check failed Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.353862 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qm85c" podUID="792f32a3-6f0a-4808-8e0d-17e8224d5ae4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.431770 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.431951 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.931925992 +0000 UTC m=+124.240438755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.432000 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.432415 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:51.932397684 +0000 UTC m=+124.240910447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.533937 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.534115 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.034087488 +0000 UTC m=+124.342600251 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.534420 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.534689 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.034683113 +0000 UTC m=+124.343195876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.542849 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" event={"ID":"26860b1c-564f-4683-bc2a-8989f2f3540d","Type":"ContainerStarted","Data":"9cc8e2be4cedcf3bd1f47f2bb13c5c78aea3a93ec6ec9c39c9e423e492e798bb"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.560624 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" event={"ID":"aa787a9f-6854-4fc6-ab47-0a587c86e7b4","Type":"ContainerStarted","Data":"2654793ee55f3e6b5ce9cd8956e1e80a1803c95dddf95f3fd2a06b6dab8f12fc"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.560670 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" event={"ID":"aa787a9f-6854-4fc6-ab47-0a587c86e7b4","Type":"ContainerStarted","Data":"b7bbcfd5f15c1d8124a3842fc66413cb213a93fb44463b3432160313a82af39f"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.572278 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzkgh" event={"ID":"cd011cae-d14d-480f-ab5e-81678301dbd5","Type":"ContainerStarted","Data":"c158be86696ae3f1b5647ef872ed82ce512b062fe4400a21bf40c0c3bedbe1ef"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.572323 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dzkgh" event={"ID":"cd011cae-d14d-480f-ab5e-81678301dbd5","Type":"ContainerStarted","Data":"42443779a162cfed2b4d6ed64b4649e52ccebc029f223cd5aa5a903117fc0df1"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.573021 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-dzkgh" Nov 
23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.575238 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" event={"ID":"8f1d1c0d-9530-44b3-bde6-b7176d43928d","Type":"ContainerStarted","Data":"aa1bbb7287b2cd02ca4593ad7b23860e1bc8d6deb22da4e0c38c4dcfb1e5e1ee"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.575964 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.577433 4988 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-cwwnt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.577469 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" podUID="8f1d1c0d-9530-44b3-bde6-b7176d43928d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.579113 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" event={"ID":"83ac5b17-6997-4282-b630-c5f94cde0103","Type":"ContainerStarted","Data":"c887ab2bb0a80d6f8224d8eeb422831a924a4739ea067ec0e4e1d3d2f978fc79"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.579141 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" event={"ID":"83ac5b17-6997-4282-b630-c5f94cde0103","Type":"ContainerStarted","Data":"4c4a25a673343e42ef407ab0f86169975526f298c34028ef9f09018cc9255aa0"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.605484 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" event={"ID":"3c332732-3313-4ee2-b0bf-2b7df8100bca","Type":"ContainerStarted","Data":"79cc573069f7206a71c16e1949a124ac7f776e05e0d841f3d99ac57391201e4d"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.607588 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s62tf" event={"ID":"9f9c1e3e-ac71-4efc-9deb-21379b0996f2","Type":"ContainerStarted","Data":"0f35adeff63c25a68f96a6e0dd9a5523e67bac8a742d4a47a3c394fb5ec56677"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.627449 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" event={"ID":"4e0e97c8-fd27-446b-9bb9-2cc726bf292d","Type":"ContainerStarted","Data":"fdb37c03990446ecbb668ff495601372bc06db6c4e8e9aa8909f02015c6a82f9"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.628309 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.636519 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf" 
event={"ID":"ec1ebad2-c865-4b2c-8202-31be32fa43d1","Type":"ContainerStarted","Data":"eea0d890d0aec852990eb5789561f2e6ff0783be7883513eba5cec950017b5dd"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.636562 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf" event={"ID":"ec1ebad2-c865-4b2c-8202-31be32fa43d1","Type":"ContainerStarted","Data":"2d9355ad2bff2e849213b9f1c5d9bee04c5de0f588ed6ab5b5ed1bf7ce12cb4a"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.637135 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.637374 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.137357761 +0000 UTC m=+124.445870524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.637438 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.639519 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.139503604 +0000 UTC m=+124.448016367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.656745 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6jf4r" podStartSLOduration=102.65672766 podStartE2EDuration="1m42.65672766s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:51.61225144 +0000 UTC m=+123.920764203" watchObservedRunningTime="2025-11-23 06:47:51.65672766 +0000 UTC m=+123.965240423" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.676648 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" event={"ID":"9c3ae6b7-09e0-4e32-b6a6-a8d5fb0c64c2","Type":"ContainerStarted","Data":"71e7c8ea35b4c5adc2406bd3ee8e0ce7704f5dae3369b8c110a2c36fd43eaf64"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.699577 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" event={"ID":"96322cb2-41ed-47ab-ad8d-ce82b34ca692","Type":"ContainerStarted","Data":"b6a0969fa0ab36eaa8aa33d1278ca7c9e930b28f9387c58fbca8fd20dcca592d"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.715251 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-s62tf" podStartSLOduration=8.715231716 podStartE2EDuration="8.715231716s" podCreationTimestamp="2025-11-23 06:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:51.668665745 +0000 UTC m=+123.977178508" watchObservedRunningTime="2025-11-23 06:47:51.715231716 +0000 UTC m=+124.023744479" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.715953 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-m24t9" podStartSLOduration=102.715948364 podStartE2EDuration="1m42.715948364s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:51.713840552 +0000 UTC m=+124.022353335" watchObservedRunningTime="2025-11-23 06:47:51.715948364 +0000 UTC m=+124.024461127" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.722586 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" event={"ID":"eda40111-df6e-44cc-b820-94ae87fe18b3","Type":"ContainerStarted","Data":"544d094879cc271d27fe01a3590cea7ded97c37fc9da64d920d4e6d7f9c7be3a"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.723444 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.725023 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" event={"ID":"3b2eebef-2026-4500-a84c-6af459ee73ce","Type":"ContainerStarted","Data":"7be0a1b6306f1b2ae75f87d439e64e29f6d7335f0a9fb7ea9764e41ec7d21f65"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.725463 4988 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-dk4s8 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.725491 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" podUID="eda40111-df6e-44cc-b820-94ae87fe18b3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.726928 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" event={"ID":"0e955596-7a86-4e71-8a38-6d0d62489a62","Type":"ContainerStarted","Data":"a85f6e744bec47a045b4b374293d1e0d8d88bab00e33cf984732014646b0f9b2"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.727263 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.728025 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" event={"ID":"037989f7-21fc-4899-b869-b0ebfbe70cd6","Type":"ContainerStarted","Data":"e88b8a74b1b5b0266ed221663b5d12f72bb9d0e2b403878d782ee60acfdf23dd"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.752830 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.754344 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.254326423 +0000 UTC m=+124.562839186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.764053 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" event={"ID":"d419a07d-8155-40cc-a93a-0bcfdc2180f2","Type":"ContainerStarted","Data":"e198811ae1dcbb38f88b45bbd25dcc84f927e7589f56ee9c295d7025330a9538"} Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.766357 4988 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jxdnl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.766389 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" podUID="f85d2cce-f57e-4242-8122-5ca62637c30d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.786634 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mrd7r" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.803655 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt56g" podStartSLOduration=102.803637012 podStartE2EDuration="1m42.803637012s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:51.801835387 +0000 UTC m=+124.110348150" watchObservedRunningTime="2025-11-23 06:47:51.803637012 +0000 UTC m=+124.112149775" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.854911 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.856022 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.356000176 +0000 UTC m=+124.664512939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.860458 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dzkgh" podStartSLOduration=8.860446126 podStartE2EDuration="8.860446126s" podCreationTimestamp="2025-11-23 06:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:51.858355865 +0000 UTC m=+124.166868628" watchObservedRunningTime="2025-11-23 06:47:51.860446126 +0000 UTC m=+124.168958889" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.893501 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" podStartSLOduration=102.893482603 podStartE2EDuration="1m42.893482603s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:51.890766216 +0000 UTC m=+124.199278979" watchObservedRunningTime="2025-11-23 06:47:51.893482603 +0000 UTC m=+124.201995356" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.931664 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" podStartSLOduration=102.931648546 podStartE2EDuration="1m42.931648546s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:51.928395456 +0000 UTC m=+124.236908219" watchObservedRunningTime="2025-11-23 06:47:51.931648546 +0000 UTC m=+124.240161309" Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.955628 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:51 crc kubenswrapper[4988]: E1123 06:47:51.957225 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.457211098 +0000 UTC m=+124.765723861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:51 crc kubenswrapper[4988]: I1123 06:47:51.998511 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-76t4n" podStartSLOduration=102.998497709 podStartE2EDuration="1m42.998497709s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:51.996635673 +0000 UTC m=+124.305148436" watchObservedRunningTime="2025-11-23 06:47:51.998497709 +0000 UTC m=+124.307010472" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.043107 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-h7nl4" podStartSLOduration=103.043091082 podStartE2EDuration="1m43.043091082s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:52.041678327 +0000 UTC m=+124.350191090" watchObservedRunningTime="2025-11-23 06:47:52.043091082 +0000 UTC m=+124.351603845" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.057229 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.057514 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.557503168 +0000 UTC m=+124.866015931 (durationBeforeRetry 500ms). 
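
Every TearDown and MountDevice failure in this stretch reduces to one lookup: the kubelet has no entry yet for kubevirt.io.hostpath-provisioner in its list of registered CSI drivers. A reduced sketch of that gate, assuming a plain mutex-guarded map (the kubelet's real csi_plugin bookkeeping is more involved):

    package main

    import (
        "fmt"
        "sync"
    )

    // registry stands in for the kubelet's list of registered CSI drivers.
    type registry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> endpoint socket
    }

    func (r *registry) client(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            // The exact failure repeated throughout the log above.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func main() {
        r := &registry{drivers: map[string]string{}}
        if _, err := r.client("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println(err) // before registration: every mount/unmount fails
        }
        r.mu.Lock()
        r.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
        r.mu.Unlock()
        ep, _ := r.client("kubevirt.io.hostpath-provisioner")
        fmt.Println("resolved endpoint:", ep) // after registration: retries succeed
    }

This is why the errors clear all at once later in the log: nothing about the volume changes, only the lookup starts succeeding.
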
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.125696 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" podStartSLOduration=103.125681613 podStartE2EDuration="1m43.125681613s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:52.083505801 +0000 UTC m=+124.392018564" watchObservedRunningTime="2025-11-23 06:47:52.125681613 +0000 UTC m=+124.434194376" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.127114 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-p96cj" podStartSLOduration=103.127109499 podStartE2EDuration="1m43.127109499s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:52.124577406 +0000 UTC m=+124.433090169" watchObservedRunningTime="2025-11-23 06:47:52.127109499 +0000 UTC m=+124.435622262" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.148106 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" podStartSLOduration=103.148087297 podStartE2EDuration="1m43.148087297s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:52.147515313 +0000 UTC m=+124.456028076" watchObservedRunningTime="2025-11-23 06:47:52.148087297 +0000 UTC m=+124.456600060" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.159133 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.159532 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.65951862 +0000 UTC m=+124.968031383 (durationBeforeRetry 500ms). 
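
The pod_startup_latency_tracker entries follow a simple rule: with firstStartedPulling and lastFinishedPulling at the zero time (no image pull observed), podStartSLOduration is just observedRunningTime minus podCreationTimestamp, which is why the pods created at 06:46:09 all land near 103s when observed at 06:47:52. A sketch of the arithmetic, with values copied from the cluster-samples-operator entry (the tracker samples its own clock, so the logged value differs by a few milliseconds):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // podCreationTimestamp has whole-second granularity in the log;
        // observedRunningTime carries nanoseconds.
        created, _ := time.Parse(time.RFC3339, "2025-11-23T06:46:09Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-11-23T06:47:52.124577406Z")
        fmt.Printf("podStartSLOduration ~ %.9fs\n", running.Sub(created).Seconds())
    }
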
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.163806 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2wd55" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.180477 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" podStartSLOduration=103.180457437 podStartE2EDuration="1m43.180457437s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:52.179536485 +0000 UTC m=+124.488049248" watchObservedRunningTime="2025-11-23 06:47:52.180457437 +0000 UTC m=+124.488970200" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.231684 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l49vf" podStartSLOduration=103.231668583 podStartE2EDuration="1m43.231668583s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:52.231091189 +0000 UTC m=+124.539603972" watchObservedRunningTime="2025-11-23 06:47:52.231668583 +0000 UTC m=+124.540181346" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.261905 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.262250 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.762239379 +0000 UTC m=+125.070752142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.355058 4988 patch_prober.go:28] interesting pod/router-default-5444994796-qm85c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:52 crc kubenswrapper[4988]: [-]has-synced failed: reason withheld Nov 23 06:47:52 crc kubenswrapper[4988]: [+]process-running ok Nov 23 06:47:52 crc kubenswrapper[4988]: healthz check failed Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.355117 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qm85c" podUID="792f32a3-6f0a-4808-8e0d-17e8224d5ae4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.363407 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.363821 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.86380735 +0000 UTC m=+125.172320113 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.465372 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.465783 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:52.965766041 +0000 UTC m=+125.274278804 (durationBeforeRetry 500ms). 
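
The router startup-probe output above is an aggregated health check: each sub-check prints a [+] or [-] line, and any failure turns the whole response into a 500. A sketch of an endpoint in that shape; the check names come from the log, but the logic is illustrative rather than the router's implementation (which, as logged, withholds its failure reasons):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // check is one named sub-check contributing to /healthz.
    type check struct {
        name string
        ok   func() bool
    }

    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if c.ok() {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                } else {
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                    failed = true
                }
            }
            if failed {
                // This 500 is what the kubelet reports as
                // "HTTP probe failed with statuscode: 500".
                w.WriteHeader(http.StatusInternalServerError)
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        http.Handle("/healthz", healthz([]check{
            {"backend-http", func() bool { return false }},
            {"has-synced", func() bool { return false }},
            {"process-running", func() bool { return true }},
        }))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Because this is a startup probe, the kubelet keeps retrying without restarting the container until the check finally passes or the failure threshold is hit.
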
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.566138 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.566364 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.066336427 +0000 UTC m=+125.374849190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.566549 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.566855 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.06684312 +0000 UTC m=+125.375355883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.647843 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.648952 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.652958 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.653512 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.660079 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.667642 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.667784 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.167764505 +0000 UTC m=+125.476277268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.667934 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.668241 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.168234306 +0000 UTC m=+125.476747069 (durationBeforeRetry 500ms). 
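
The reflector.go lines record the kubelet populating local watch caches for objects the new revision-pruner pod references: the namespace's kube-root-ca.crt ConfigMap and the installer service account's dockercfg Secret. The same list-watch machinery is exposed by client-go's shared informers; a minimal sketch, with the kubeconfig path as a placeholder (the kubelet uses its own authenticated client internally):

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        factory := informers.NewSharedInformerFactoryWithOptions(
            kubernetes.NewForConfigOrDie(cfg), 10*time.Minute,
            informers.WithNamespace("openshift-kube-controller-manager"))
        inf := factory.Core().V1().Secrets().Informer()
        inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                fmt.Println("cache add:", obj.(*corev1.Secret).Name)
            },
        })
        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        // Blocks until the initial list is in the local cache -- the
        // "Caches populated" milestone the kubelet logs above.
        cache.WaitForCacheSync(stop, inf.HasSynced)
        fmt.Println("caches populated for *v1.Secret")
    }
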
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.769374 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.769602 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.269575752 +0000 UTC m=+125.578088515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.769985 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.770025 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.770081 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.770627 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.270606427 +0000 UTC m=+125.579119190 (durationBeforeRetry 500ms). 
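
Each E-level nestedpendingoperations line above is a backoff gate refusing to rerun a failed volume operation before its window expires; in this stretch the window is 500ms on every attempt. A reduced model with a fixed window (the real gate can grow the delay for an operation that keeps failing):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // backoff gates retries of one named operation, in the spirit of
    // nestedpendingoperations: a failed attempt arms a window during
    // which further attempts are rejected without running.
    type backoff struct {
        lastFailure time.Time
        wait        time.Duration
    }

    func (b *backoff) try(op func() error) error {
        if time.Since(b.lastFailure) < b.wait {
            return fmt.Errorf("no retries permitted until %s",
                b.lastFailure.Add(b.wait).Format(time.RFC3339Nano))
        }
        if err := op(); err != nil {
            b.lastFailure = time.Now()
            return err
        }
        return nil
    }

    func main() {
        b := &backoff{wait: 500 * time.Millisecond}
        mount := func() error {
            // Stand-in for MountDevice while the CSI driver is missing.
            return errors.New("driver name kubevirt.io.hostpath-provisioner not found")
        }
        fmt.Println(b.try(mount)) // fails, arms the 500ms window
        fmt.Println(b.try(mount)) // rejected: still inside the window
        time.Sleep(600 * time.Millisecond)
        fmt.Println(b.try(mount)) // window elapsed, the operation runs again
    }
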
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.792153 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" event={"ID":"96322cb2-41ed-47ab-ad8d-ce82b34ca692","Type":"ContainerStarted","Data":"39a817bfa58cb5e741579d39b18f931ca7820de45c29c9ee8dbd381215e2c9f3"} Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.804583 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-cwwnt" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.813231 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dk4s8" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.872755 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.873042 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.372997289 +0000 UTC m=+125.681510042 (durationBeforeRetry 500ms). 
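
The "SyncLoop (PLEG)" entries are the kubelet reacting to pod lifecycle events relayed from the container runtime: each event carries the pod UID, what happened, and the container or sandbox ID involved, and the sync loop reconciles just that pod. A toy event pump in the same {ID, Type, Data} shape (the real PLEG diffs runtime relists or consumes CRI events; the IDs below are abbreviated from the log):

    package main

    import "fmt"

    // event mirrors the triple printed by the PLEG entries above.
    type event struct {
        ID   string // pod UID
        Type string // e.g. ContainerStarted, ContainerDied
        Data string // container or sandbox ID
    }

    func main() {
        events := make(chan event, 8)
        go func() {
            // A relist goroutine would diff runtime state and emit these.
            events <- event{"96322cb2-41ed-47ab-ad8d-ce82b34ca692", "ContainerStarted", "39a817bfa58c..."}
            close(events)
        }()
        for ev := range events {
            // The sync loop wakes per event and reconciles only that pod.
            fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
        }
    }
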
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.873291 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.873327 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.873457 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.873992 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.874214 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.374176418 +0000 UTC m=+125.682689181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.900121 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.963860 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.974852 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.975239 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.475176105 +0000 UTC m=+125.783688868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:52 crc kubenswrapper[4988]: I1123 06:47:52.975800 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:52 crc kubenswrapper[4988]: E1123 06:47:52.976156 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.476149029 +0000 UTC m=+125.784661792 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.077092 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:53 crc kubenswrapper[4988]: E1123 06:47:53.077364 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.57733543 +0000 UTC m=+125.885848193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.077966 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:53 crc kubenswrapper[4988]: E1123 06:47:53.078524 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.578508169 +0000 UTC m=+125.887020932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.193086 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:53 crc kubenswrapper[4988]: E1123 06:47:53.193540 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.693523693 +0000 UTC m=+126.002036456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.297563 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:53 crc kubenswrapper[4988]: E1123 06:47:53.297951 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:53.797934274 +0000 UTC m=+126.106447037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.352362 4988 patch_prober.go:28] interesting pod/router-default-5444994796-qm85c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:53 crc kubenswrapper[4988]: [-]has-synced failed: reason withheld Nov 23 06:47:53 crc kubenswrapper[4988]: [+]process-running ok Nov 23 06:47:53 crc kubenswrapper[4988]: healthz check failed Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.352419 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qm85c" podUID="792f32a3-6f0a-4808-8e0d-17e8224d5ae4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.369113 4988 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.399504 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:53 crc kubenswrapper[4988]: E1123 06:47:53.399869 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
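
The plugin_watcher line just above is the turning point of this whole stretch: a registration socket for kubevirt.io.hostpath-provisioner has appeared under /var/lib/kubelet/plugins_registry. The kubelet discovers these sockets with a filesystem watcher; a sketch of that discovery step using fsnotify (the follow-on GetInfo/NotifyRegistrationStatus gRPC handshake that actually validates and registers the driver is omitted here):

    package main

    import (
        "log"
        "path/filepath"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            // A new *.sock file is a plugin announcing itself, like the
            // kubevirt.io.hostpath-provisioner-reg.sock event logged above.
            if ev.Op&fsnotify.Create != 0 && filepath.Ext(ev.Name) == ".sock" {
                log.Printf("new plugin socket: %s (would dial and register)", ev.Name)
            }
        }
    }

Within a few hundred milliseconds of this event, the log shows RegisterPlugin, the CSI driver validation, and the first successful TearDown.
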
No retries permitted until 2025-11-23 06:47:53.899850193 +0000 UTC m=+126.208362946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.501364 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:53 crc kubenswrapper[4988]: E1123 06:47:53.501780 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:54.001761063 +0000 UTC m=+126.310273826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.549848 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 06:47:53 crc kubenswrapper[4988]: W1123 06:47:53.555807 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8b6131da_5781_4dc5_89ec_b21c774bc3e4.slice/crio-eff8846fb0c46b0d310a29bfd00b41f29adfb185305d79a364a68aabdd63ce8d WatchSource:0}: Error finding container eff8846fb0c46b0d310a29bfd00b41f29adfb185305d79a364a68aabdd63ce8d: Status 404 returned error can't find the container with id eff8846fb0c46b0d310a29bfd00b41f29adfb185305d79a364a68aabdd63ce8d Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.602663 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:53 crc kubenswrapper[4988]: E1123 06:47:53.602860 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:47:54.102831582 +0000 UTC m=+126.411344345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.603067 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:53 crc kubenswrapper[4988]: E1123 06:47:53.603394 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:47:54.103386095 +0000 UTC m=+126.411898858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-c89ht" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.663791 4988 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-23T06:47:53.369135994Z","Handler":null,"Name":""} Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.666872 4988 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.666900 4988 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.703931 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.709708 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.799384 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" event={"ID":"96322cb2-41ed-47ab-ad8d-ce82b34ca692","Type":"ContainerStarted","Data":"89cc7a372dea00489a656e8a240543dbd6f9709eb0fccdf2c3a71f462b1994ef"} Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.799752 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" event={"ID":"96322cb2-41ed-47ab-ad8d-ce82b34ca692","Type":"ContainerStarted","Data":"afabfdd1cd4b86d840bbed04267ba52714ffd0ec0f46fbb6d29ca58a4df4e6bd"} Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.801037 4988 generic.go:334] "Generic (PLEG): container finished" podID="037989f7-21fc-4899-b869-b0ebfbe70cd6" containerID="e88b8a74b1b5b0266ed221663b5d12f72bb9d0e2b403878d782ee60acfdf23dd" exitCode=0 Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.801115 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" event={"ID":"037989f7-21fc-4899-b869-b0ebfbe70cd6","Type":"ContainerDied","Data":"e88b8a74b1b5b0266ed221663b5d12f72bb9d0e2b403878d782ee60acfdf23dd"} Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.802601 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b6131da-5781-4dc5-89ec-b21c774bc3e4","Type":"ContainerStarted","Data":"eff8846fb0c46b0d310a29bfd00b41f29adfb185305d79a364a68aabdd63ce8d"} Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.804711 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.808417 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.808469 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.810532 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6p2zp" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.841415 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-c89ht\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.889522 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.897276 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.947476 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gcsbm"] Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.948735 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.954657 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 06:47:53 crc kubenswrapper[4988]: I1123 06:47:53.974962 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gcsbm"] Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.116840 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5gwgn"] Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.117756 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.119953 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.120178 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-catalog-content\") pod \"community-operators-gcsbm\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") " pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.120357 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-utilities\") pod \"community-operators-gcsbm\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") " pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.120386 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfdmg\" (UniqueName: \"kubernetes.io/projected/fc4d3d52-3334-454d-8ea9-3acc065a17b3-kube-api-access-dfdmg\") pod \"community-operators-gcsbm\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") " pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.127672 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5gwgn"] Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.190289 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c89ht"] Nov 23 06:47:54 crc kubenswrapper[4988]: W1123 06:47:54.205391 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8693d0cd_897f_4bef_a923_783f1bf8c584.slice/crio-fe5d58dc68c5f407bc14d49ac4b905c4488cba795137889844bd8e76e594d8c5 WatchSource:0}: Error finding container fe5d58dc68c5f407bc14d49ac4b905c4488cba795137889844bd8e76e594d8c5: Status 404 returned error can't find the container with id fe5d58dc68c5f407bc14d49ac4b905c4488cba795137889844bd8e76e594d8c5 Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.221657 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-catalog-content\") pod \"community-operators-gcsbm\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") " pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.221756 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-catalog-content\") pod \"certified-operators-5gwgn\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.221798 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-utilities\") pod \"community-operators-gcsbm\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") 
" pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.221815 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfdmg\" (UniqueName: \"kubernetes.io/projected/fc4d3d52-3334-454d-8ea9-3acc065a17b3-kube-api-access-dfdmg\") pod \"community-operators-gcsbm\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") " pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.221854 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-utilities\") pod \"certified-operators-5gwgn\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.221881 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgg5f\" (UniqueName: \"kubernetes.io/projected/20a119bf-0e89-4c4b-8502-bf5d7759a95d-kube-api-access-hgg5f\") pod \"certified-operators-5gwgn\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.222125 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-catalog-content\") pod \"community-operators-gcsbm\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") " pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.222307 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-utilities\") pod \"community-operators-gcsbm\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") " pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.244427 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfdmg\" (UniqueName: \"kubernetes.io/projected/fc4d3d52-3334-454d-8ea9-3acc065a17b3-kube-api-access-dfdmg\") pod \"community-operators-gcsbm\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") " pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.268648 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.319074 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-phks7"] Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.327319 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-catalog-content\") pod \"certified-operators-5gwgn\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.327401 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-utilities\") pod \"certified-operators-5gwgn\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.327435 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgg5f\" (UniqueName: \"kubernetes.io/projected/20a119bf-0e89-4c4b-8502-bf5d7759a95d-kube-api-access-hgg5f\") pod \"certified-operators-5gwgn\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.328316 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-catalog-content\") pod \"certified-operators-5gwgn\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.328550 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-utilities\") pod \"certified-operators-5gwgn\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.328865 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.335690 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phks7"] Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.345526 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgg5f\" (UniqueName: \"kubernetes.io/projected/20a119bf-0e89-4c4b-8502-bf5d7759a95d-kube-api-access-hgg5f\") pod \"certified-operators-5gwgn\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.356639 4988 patch_prober.go:28] interesting pod/router-default-5444994796-qm85c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:54 crc kubenswrapper[4988]: [-]has-synced failed: reason withheld Nov 23 06:47:54 crc kubenswrapper[4988]: [+]process-running ok Nov 23 06:47:54 crc kubenswrapper[4988]: healthz check failed Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.356730 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qm85c" podUID="792f32a3-6f0a-4808-8e0d-17e8224d5ae4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.428157 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-catalog-content\") pod \"community-operators-phks7\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.428228 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-utilities\") pod \"community-operators-phks7\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.428337 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhpz\" (UniqueName: \"kubernetes.io/projected/d204a0c1-c9fb-436c-84e1-458826c49395-kube-api-access-bhhpz\") pod \"community-operators-phks7\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.448025 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.483929 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gcsbm"] Nov 23 06:47:54 crc kubenswrapper[4988]: W1123 06:47:54.496855 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4d3d52_3334_454d_8ea9_3acc065a17b3.slice/crio-09c0674125028dfbc0558eba764d4e1b286fbdc8fea0b36b87b73debf4499625 WatchSource:0}: Error finding container 09c0674125028dfbc0558eba764d4e1b286fbdc8fea0b36b87b73debf4499625: Status 404 returned error can't find the container with id 09c0674125028dfbc0558eba764d4e1b286fbdc8fea0b36b87b73debf4499625 Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.505580 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.515659 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4ft6r"] Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.516650 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.527867 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4ft6r"] Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.529591 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-catalog-content\") pod \"community-operators-phks7\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.529639 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-utilities\") pod \"community-operators-phks7\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.529723 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhhpz\" (UniqueName: \"kubernetes.io/projected/d204a0c1-c9fb-436c-84e1-458826c49395-kube-api-access-bhhpz\") pod \"community-operators-phks7\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.530468 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-catalog-content\") pod \"community-operators-phks7\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.530755 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-utilities\") pod \"community-operators-phks7\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc 
kubenswrapper[4988]: I1123 06:47:54.565563 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhhpz\" (UniqueName: \"kubernetes.io/projected/d204a0c1-c9fb-436c-84e1-458826c49395-kube-api-access-bhhpz\") pod \"community-operators-phks7\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " pod="openshift-marketplace/community-operators-phks7" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.631277 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-catalog-content\") pod \"certified-operators-4ft6r\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.631361 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-utilities\") pod \"certified-operators-4ft6r\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.631389 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxrk\" (UniqueName: \"kubernetes.io/projected/2e5fbbbf-ab9f-494f-879f-b867959deb97-kube-api-access-jpxrk\") pod \"certified-operators-4ft6r\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.640502 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.645586 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-krd2k" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.666175 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-phks7"
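The reconciler and operation_generator records above identify each volume by a UniqueName of the form <plugin>/<podUID>-<volumeName> (for example kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-catalog-content). Once MountVolume.SetUp succeeds, the backing directory sits under /var/lib/kubelet/pods/<podUID>/volumes/ with the plugin name escaped, the same layout as the kubernetes.io~configmap paths restorecon relabels at the top of this journal. A minimal Go sketch of that mapping, assuming the standard kubelet directory layout; volumeDir is an illustrative helper, not kubelet code:

package main

import (
	"fmt"
	"strings"
)

// volumeDir maps a reconciler UniqueName to the expected on-disk directory.
// Assumes <plugin>/<podUID>-<volumeName> with a 36-character pod UID, and the
// kubelet convention of escaping "/" in the plugin name as "~".
func volumeDir(uniqueName string) string {
	i := strings.LastIndex(uniqueName, "/")
	plugin, rest := uniqueName[:i], uniqueName[i+1:]
	podUID, volName := rest[:36], rest[37:] // skip the "-" after the UID
	return fmt.Sprintf("/var/lib/kubelet/pods/%s/volumes/%s/%s",
		podUID, strings.ReplaceAll(plugin, "/", "~"), volName)
}

func main() {
	// Prints /var/lib/kubelet/pods/2e5fbbbf-ab9f-494f-879f-b867959deb97/volumes/kubernetes.io~empty-dir/catalog-content
	fmt.Println(volumeDir("kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-catalog-content"))
}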
Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.680154 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5gwgn"] Nov 23 06:47:54 crc kubenswrapper[4988]: W1123 06:47:54.726980 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20a119bf_0e89_4c4b_8502_bf5d7759a95d.slice/crio-4eea5382c1165642ceee879c657f864f2fe0fcdb4721fb6d7408a2df1a4f5b54 WatchSource:0}: Error finding container 4eea5382c1165642ceee879c657f864f2fe0fcdb4721fb6d7408a2df1a4f5b54: Status 404 returned error can't find the container with id 4eea5382c1165642ceee879c657f864f2fe0fcdb4721fb6d7408a2df1a4f5b54 Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.732986 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-catalog-content\") pod \"certified-operators-4ft6r\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.733051 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-utilities\") pod \"certified-operators-4ft6r\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.733076 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxrk\" (UniqueName: \"kubernetes.io/projected/2e5fbbbf-ab9f-494f-879f-b867959deb97-kube-api-access-jpxrk\") pod \"certified-operators-4ft6r\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.734667 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-utilities\") pod \"certified-operators-4ft6r\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.735258 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-catalog-content\") pod \"certified-operators-4ft6r\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.749552 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxrk\" (UniqueName: \"kubernetes.io/projected/2e5fbbbf-ab9f-494f-879f-b867959deb97-kube-api-access-jpxrk\") pod \"certified-operators-4ft6r\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.827021 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gwgn" event={"ID":"20a119bf-0e89-4c4b-8502-bf5d7759a95d","Type":"ContainerStarted","Data":"4eea5382c1165642ceee879c657f864f2fe0fcdb4721fb6d7408a2df1a4f5b54"} Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.829482 4988 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" event={"ID":"8693d0cd-897f-4bef-a923-783f1bf8c584","Type":"ContainerStarted","Data":"3f6397b885a9fb373eff14f67bf8813dd8b47b812c941c9f93e99a618b267be0"} Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.829587 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" event={"ID":"8693d0cd-897f-4bef-a923-783f1bf8c584","Type":"ContainerStarted","Data":"fe5d58dc68c5f407bc14d49ac4b905c4488cba795137889844bd8e76e594d8c5"} Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.829621 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.831072 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.832972 4988 generic.go:334] "Generic (PLEG): container finished" podID="8b6131da-5781-4dc5-89ec-b21c774bc3e4" containerID="6f828f4c433eb531a53bba3bb14d10cae62ffa43a57ea57a680efe190eb981f4" exitCode=0 Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.833044 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b6131da-5781-4dc5-89ec-b21c774bc3e4","Type":"ContainerDied","Data":"6f828f4c433eb531a53bba3bb14d10cae62ffa43a57ea57a680efe190eb981f4"} Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.848633 4988 generic.go:334] "Generic (PLEG): container finished" podID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerID="77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54" exitCode=0 Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.848820 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcsbm" event={"ID":"fc4d3d52-3334-454d-8ea9-3acc065a17b3","Type":"ContainerDied","Data":"77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54"} Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.848872 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcsbm" event={"ID":"fc4d3d52-3334-454d-8ea9-3acc065a17b3","Type":"ContainerStarted","Data":"09c0674125028dfbc0558eba764d4e1b286fbdc8fea0b36b87b73debf4499625"} Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.858126 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.862243 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" podStartSLOduration=105.862209595 podStartE2EDuration="1m45.862209595s" podCreationTimestamp="2025-11-23 06:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:54.848350772 +0000 UTC m=+127.156863535" watchObservedRunningTime="2025-11-23 06:47:54.862209595 +0000 UTC m=+127.170722358" Nov 23 06:47:54 crc kubenswrapper[4988]: I1123 06:47:54.894359 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-sfvqw" podStartSLOduration=11.894332869 podStartE2EDuration="11.894332869s" podCreationTimestamp="2025-11-23 06:47:43 +0000 UTC" 
Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.214306 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.240592 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phks7"] Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.341996 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhvlv\" (UniqueName: \"kubernetes.io/projected/037989f7-21fc-4899-b869-b0ebfbe70cd6-kube-api-access-jhvlv\") pod \"037989f7-21fc-4899-b869-b0ebfbe70cd6\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.342461 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037989f7-21fc-4899-b869-b0ebfbe70cd6-config-volume\") pod \"037989f7-21fc-4899-b869-b0ebfbe70cd6\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.342552 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037989f7-21fc-4899-b869-b0ebfbe70cd6-secret-volume\") pod \"037989f7-21fc-4899-b869-b0ebfbe70cd6\" (UID: \"037989f7-21fc-4899-b869-b0ebfbe70cd6\") " Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.349058 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037989f7-21fc-4899-b869-b0ebfbe70cd6-config-volume" (OuterVolumeSpecName: "config-volume") pod "037989f7-21fc-4899-b869-b0ebfbe70cd6" (UID: "037989f7-21fc-4899-b869-b0ebfbe70cd6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.351874 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/037989f7-21fc-4899-b869-b0ebfbe70cd6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "037989f7-21fc-4899-b869-b0ebfbe70cd6" (UID: "037989f7-21fc-4899-b869-b0ebfbe70cd6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.352375 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037989f7-21fc-4899-b869-b0ebfbe70cd6-kube-api-access-jhvlv" (OuterVolumeSpecName: "kube-api-access-jhvlv") pod "037989f7-21fc-4899-b869-b0ebfbe70cd6" (UID: "037989f7-21fc-4899-b869-b0ebfbe70cd6"). InnerVolumeSpecName "kube-api-access-jhvlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.354940 4988 patch_prober.go:28] interesting pod/router-default-5444994796-qm85c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:55 crc kubenswrapper[4988]: [-]has-synced failed: reason withheld Nov 23 06:47:55 crc kubenswrapper[4988]: [+]process-running ok Nov 23 06:47:55 crc kubenswrapper[4988]: healthz check failed Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.355004 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qm85c" podUID="792f32a3-6f0a-4808-8e0d-17e8224d5ae4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.393271 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4ft6r"] Nov 23 06:47:55 crc kubenswrapper[4988]: W1123 06:47:55.397124 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e5fbbbf_ab9f_494f_879f_b867959deb97.slice/crio-c7f5881da7c1e124c96610bd6566dd012515e0c5ff6b6059faff7c377660cfcc WatchSource:0}: Error finding container c7f5881da7c1e124c96610bd6566dd012515e0c5ff6b6059faff7c377660cfcc: Status 404 returned error can't find the container with id c7f5881da7c1e124c96610bd6566dd012515e0c5ff6b6059faff7c377660cfcc Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.443467 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhvlv\" (UniqueName: \"kubernetes.io/projected/037989f7-21fc-4899-b869-b0ebfbe70cd6-kube-api-access-jhvlv\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.443499 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/037989f7-21fc-4899-b869-b0ebfbe70cd6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.443509 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/037989f7-21fc-4899-b869-b0ebfbe70cd6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.855115 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" event={"ID":"037989f7-21fc-4899-b869-b0ebfbe70cd6","Type":"ContainerDied","Data":"33c6312775f402fad94edc5bef341b7d762dbb46ab72ad5c8222f04fae41a844"} Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.855163 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33c6312775f402fad94edc5bef341b7d762dbb46ab72ad5c8222f04fae41a844" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.855161 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp" Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.857275 4988 generic.go:334] "Generic (PLEG): container finished" podID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerID="31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81" exitCode=0 Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.857320 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ft6r" event={"ID":"2e5fbbbf-ab9f-494f-879f-b867959deb97","Type":"ContainerDied","Data":"31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81"} Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.857351 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ft6r" event={"ID":"2e5fbbbf-ab9f-494f-879f-b867959deb97","Type":"ContainerStarted","Data":"c7f5881da7c1e124c96610bd6566dd012515e0c5ff6b6059faff7c377660cfcc"} Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.859238 4988 generic.go:334] "Generic (PLEG): container finished" podID="d204a0c1-c9fb-436c-84e1-458826c49395" containerID="9bda43c37153cea21f4e2341306f4c1ecc7fbc525c2278991c712ebc79ddd6cd" exitCode=0 Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.859295 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phks7" event={"ID":"d204a0c1-c9fb-436c-84e1-458826c49395","Type":"ContainerDied","Data":"9bda43c37153cea21f4e2341306f4c1ecc7fbc525c2278991c712ebc79ddd6cd"} Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.859333 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phks7" event={"ID":"d204a0c1-c9fb-436c-84e1-458826c49395","Type":"ContainerStarted","Data":"0b5b62598b69154e5008e9322e4ec61ebbd0de2ef50e9734a6fc69124d44f552"} Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.863576 4988 generic.go:334] "Generic (PLEG): container finished" podID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerID="44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2" exitCode=0 Nov 23 06:47:55 crc kubenswrapper[4988]: I1123 06:47:55.864424 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gwgn" event={"ID":"20a119bf-0e89-4c4b-8502-bf5d7759a95d","Type":"ContainerDied","Data":"44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2"} Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.112507 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.121296 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5z4ss"] Nov 23 06:47:56 crc kubenswrapper[4988]: E1123 06:47:56.121525 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037989f7-21fc-4899-b869-b0ebfbe70cd6" containerName="collect-profiles" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.121543 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="037989f7-21fc-4899-b869-b0ebfbe70cd6" containerName="collect-profiles" Nov 23 06:47:56 crc kubenswrapper[4988]: E1123 06:47:56.121563 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b6131da-5781-4dc5-89ec-b21c774bc3e4" containerName="pruner" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.121570 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b6131da-5781-4dc5-89ec-b21c774bc3e4" containerName="pruner" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.121678 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="037989f7-21fc-4899-b869-b0ebfbe70cd6" containerName="collect-profiles" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.121694 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b6131da-5781-4dc5-89ec-b21c774bc3e4" containerName="pruner" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.124377 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.127734 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5z4ss"] Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.129069 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.253419 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kubelet-dir\") pod \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\" (UID: \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\") " Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.253519 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kube-api-access\") pod \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\" (UID: \"8b6131da-5781-4dc5-89ec-b21c774bc3e4\") " Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.253551 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b6131da-5781-4dc5-89ec-b21c774bc3e4" (UID: "8b6131da-5781-4dc5-89ec-b21c774bc3e4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.253748 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-utilities\") pod \"redhat-marketplace-5z4ss\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.253794 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-catalog-content\") pod \"redhat-marketplace-5z4ss\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.253899 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfq9g\" (UniqueName: \"kubernetes.io/projected/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-kube-api-access-mfq9g\") pod \"redhat-marketplace-5z4ss\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.254007 4988 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.268601 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b6131da-5781-4dc5-89ec-b21c774bc3e4" (UID: "8b6131da-5781-4dc5-89ec-b21c774bc3e4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.297947 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.297996 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.305605 4988 patch_prober.go:28] interesting pod/console-f9d7485db-gfkdg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.305661 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gfkdg" podUID="86592f41-a930-4436-96a8-4676e4bbf9bf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.333062 4988 patch_prober.go:28] interesting pod/downloads-7954f5f757-6hs4l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.333125 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6hs4l" podUID="a42d143b-74b0-4ff4-b3c1-6bd59e656461" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.333255 4988 patch_prober.go:28] interesting pod/downloads-7954f5f757-6hs4l container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.333353 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6hs4l" podUID="a42d143b-74b0-4ff4-b3c1-6bd59e656461" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.355435 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-utilities\") pod \"redhat-marketplace-5z4ss\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.355497 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-catalog-content\") pod \"redhat-marketplace-5z4ss\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.355550 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfq9g\" (UniqueName: \"kubernetes.io/projected/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-kube-api-access-mfq9g\") 
pod \"redhat-marketplace-5z4ss\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.355590 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b6131da-5781-4dc5-89ec-b21c774bc3e4-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.356461 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-utilities\") pod \"redhat-marketplace-5z4ss\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.356722 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-catalog-content\") pod \"redhat-marketplace-5z4ss\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.358653 4988 patch_prober.go:28] interesting pod/router-default-5444994796-qm85c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:56 crc kubenswrapper[4988]: [-]has-synced failed: reason withheld Nov 23 06:47:56 crc kubenswrapper[4988]: [+]process-running ok Nov 23 06:47:56 crc kubenswrapper[4988]: healthz check failed Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.358720 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qm85c" podUID="792f32a3-6f0a-4808-8e0d-17e8224d5ae4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.383640 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfq9g\" (UniqueName: \"kubernetes.io/projected/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-kube-api-access-mfq9g\") pod \"redhat-marketplace-5z4ss\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.401223 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.402159 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.406889 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.407027 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.416097 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.440278 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.517051 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x9rzv"] Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.518162 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.523830 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9rzv"] Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.570706 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/274601f1-67ae-4d87-af93-385ddbeedf82-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"274601f1-67ae-4d87-af93-385ddbeedf82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.570860 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/274601f1-67ae-4d87-af93-385ddbeedf82-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"274601f1-67ae-4d87-af93-385ddbeedf82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.672073 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45vr\" (UniqueName: \"kubernetes.io/projected/6214e9a6-9472-42b7-be56-cd88296cc134-kube-api-access-b45vr\") pod \"redhat-marketplace-x9rzv\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.672131 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/274601f1-67ae-4d87-af93-385ddbeedf82-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"274601f1-67ae-4d87-af93-385ddbeedf82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.672222 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-utilities\") pod \"redhat-marketplace-x9rzv\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.672243 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-catalog-content\") pod \"redhat-marketplace-x9rzv\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.672266 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/274601f1-67ae-4d87-af93-385ddbeedf82-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"274601f1-67ae-4d87-af93-385ddbeedf82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.672348 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/274601f1-67ae-4d87-af93-385ddbeedf82-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"274601f1-67ae-4d87-af93-385ddbeedf82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.690868 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/274601f1-67ae-4d87-af93-385ddbeedf82-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"274601f1-67ae-4d87-af93-385ddbeedf82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.726361 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.773863 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-utilities\") pod \"redhat-marketplace-x9rzv\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.773924 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-catalog-content\") pod \"redhat-marketplace-x9rzv\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.773980 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45vr\" (UniqueName: \"kubernetes.io/projected/6214e9a6-9472-42b7-be56-cd88296cc134-kube-api-access-b45vr\") pod \"redhat-marketplace-x9rzv\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.775141 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-utilities\") pod \"redhat-marketplace-x9rzv\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.775602 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-catalog-content\") pod \"redhat-marketplace-x9rzv\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.783073 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5z4ss"] Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.797071 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45vr\" (UniqueName: \"kubernetes.io/projected/6214e9a6-9472-42b7-be56-cd88296cc134-kube-api-access-b45vr\") pod \"redhat-marketplace-x9rzv\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:56 crc kubenswrapper[4988]: W1123 06:47:56.804782 4988 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d6d0eff_3d7a_4913_996a_d7db0261b1d7.slice/crio-66e7417146ffcb630f5be67573e14834b476ae5c8223e5f167bd07c32a7f9624 WatchSource:0}: Error finding container 66e7417146ffcb630f5be67573e14834b476ae5c8223e5f167bd07c32a7f9624: Status 404 returned error can't find the container with id 66e7417146ffcb630f5be67573e14834b476ae5c8223e5f167bd07c32a7f9624 Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.857935 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.873113 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5z4ss" event={"ID":"1d6d0eff-3d7a-4913-996a-d7db0261b1d7","Type":"ContainerStarted","Data":"66e7417146ffcb630f5be67573e14834b476ae5c8223e5f167bd07c32a7f9624"} Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.876624 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b6131da-5781-4dc5-89ec-b21c774bc3e4","Type":"ContainerDied","Data":"eff8846fb0c46b0d310a29bfd00b41f29adfb185305d79a364a68aabdd63ce8d"} Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.876663 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eff8846fb0c46b0d310a29bfd00b41f29adfb185305d79a364a68aabdd63ce8d" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.876754 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:47:56 crc kubenswrapper[4988]: I1123 06:47:56.888080 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.126052 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xw2hq"] Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.127503 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.132195 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.140153 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xw2hq"] Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.181510 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.280998 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-catalog-content\") pod \"redhat-operators-xw2hq\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.281147 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-utilities\") pod \"redhat-operators-xw2hq\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.281268 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb5x5\" (UniqueName: \"kubernetes.io/projected/9569d22d-764e-44fd-a6ff-6266c766b304-kube-api-access-nb5x5\") pod \"redhat-operators-xw2hq\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.297995 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9rzv"] Nov 23 06:47:57 crc kubenswrapper[4988]: W1123 06:47:57.309857 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6214e9a6_9472_42b7_be56_cd88296cc134.slice/crio-d1bb5015cac64579f9f06db189d3079101c550ce176e2659122500936a0494f2 WatchSource:0}: Error finding container d1bb5015cac64579f9f06db189d3079101c550ce176e2659122500936a0494f2: Status 404 returned error can't find the container with id d1bb5015cac64579f9f06db189d3079101c550ce176e2659122500936a0494f2 Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.347596 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.351980 4988 patch_prober.go:28] interesting pod/router-default-5444994796-qm85c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:57 crc kubenswrapper[4988]: [-]has-synced failed: reason withheld Nov 23 06:47:57 crc kubenswrapper[4988]: [+]process-running ok Nov 23 06:47:57 crc kubenswrapper[4988]: healthz check failed Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.352099 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qm85c" podUID="792f32a3-6f0a-4808-8e0d-17e8224d5ae4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
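The patch_prober/prober records repeating through this stretch follow the kubelet's HTTP probe rule: any status code in [200, 400) counts as success, anything else as failure, and the start of the response body (here the router healthz page's [-]/[+] check list) is logged alongside the failure. A minimal Go sketch of that classification, against a hypothetical local healthz endpoint; the URL is illustrative, not taken from the log:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical endpoint standing in for the router's health check.
	resp, err := http.Get("http://127.0.0.1:1936/healthz/ready")
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	defer resp.Body.Close()
	// Read at most a few KB, mirroring the truncated start-of-body in the log.
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 4096))
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success")
	} else {
		fmt.Printf("HTTP probe failed with statuscode: %d\n%s", resp.StatusCode, body)
	}
}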
Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.382303 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-catalog-content\") pod \"redhat-operators-xw2hq\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.382448 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-utilities\") pod \"redhat-operators-xw2hq\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.382523 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb5x5\" (UniqueName: \"kubernetes.io/projected/9569d22d-764e-44fd-a6ff-6266c766b304-kube-api-access-nb5x5\") pod \"redhat-operators-xw2hq\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.382994 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-catalog-content\") pod \"redhat-operators-xw2hq\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.383511 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-utilities\") pod \"redhat-operators-xw2hq\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.424887 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb5x5\" (UniqueName: \"kubernetes.io/projected/9569d22d-764e-44fd-a6ff-6266c766b304-kube-api-access-nb5x5\") pod \"redhat-operators-xw2hq\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.520389 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l8bn4"] Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.521333 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.544809 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.548848 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8bn4"] Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.690151 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-utilities\") pod \"redhat-operators-l8bn4\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") " pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.690324 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdhqq\" (UniqueName: \"kubernetes.io/projected/ed93687f-5cdf-4367-9cff-93b404983ba1-kube-api-access-mdhqq\") pod \"redhat-operators-l8bn4\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") " pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.690455 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-catalog-content\") pod \"redhat-operators-l8bn4\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") " pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.797055 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-catalog-content\") pod \"redhat-operators-l8bn4\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") " pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.797496 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-utilities\") pod \"redhat-operators-l8bn4\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") " pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.797576 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdhqq\" (UniqueName: \"kubernetes.io/projected/ed93687f-5cdf-4367-9cff-93b404983ba1-kube-api-access-mdhqq\") pod \"redhat-operators-l8bn4\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") " pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.798691 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-catalog-content\") pod \"redhat-operators-l8bn4\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") " pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.798908 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-utilities\") pod \"redhat-operators-l8bn4\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") " pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.830980 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mdhqq\" (UniqueName: \"kubernetes.io/projected/ed93687f-5cdf-4367-9cff-93b404983ba1-kube-api-access-mdhqq\") pod \"redhat-operators-l8bn4\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") " pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.846911 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8bn4" Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.923582 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"274601f1-67ae-4d87-af93-385ddbeedf82","Type":"ContainerStarted","Data":"bb79961b6595aba506a246c1c5bef10110627af693d26c88305eeb413d408b65"} Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.923624 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"274601f1-67ae-4d87-af93-385ddbeedf82","Type":"ContainerStarted","Data":"c1b34e9a3031772c1c0bbc0d1677b554c9a44098a918eb502023769513175305"} Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.932713 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerID="ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e" exitCode=0 Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.932781 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5z4ss" event={"ID":"1d6d0eff-3d7a-4913-996a-d7db0261b1d7","Type":"ContainerDied","Data":"ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e"} Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.943286 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xw2hq"] Nov 23 06:47:57 crc kubenswrapper[4988]: I1123 06:47:57.943491 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.94346894 podStartE2EDuration="1.94346894s" podCreationTimestamp="2025-11-23 06:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:47:57.939180804 +0000 UTC m=+130.247693567" watchObservedRunningTime="2025-11-23 06:47:57.94346894 +0000 UTC m=+130.251981703" Nov 23 06:47:58 crc kubenswrapper[4988]: I1123 06:47:58.008534 4988 generic.go:334] "Generic (PLEG): container finished" podID="6214e9a6-9472-42b7-be56-cd88296cc134" containerID="ee7ad72b0fed76027b04444715f8404d5264f1557511bd9b8f5de91c272b3e81" exitCode=0 Nov 23 06:47:58 crc kubenswrapper[4988]: I1123 06:47:58.008574 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9rzv" event={"ID":"6214e9a6-9472-42b7-be56-cd88296cc134","Type":"ContainerDied","Data":"ee7ad72b0fed76027b04444715f8404d5264f1557511bd9b8f5de91c272b3e81"} Nov 23 06:47:58 crc kubenswrapper[4988]: I1123 06:47:58.008597 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9rzv" event={"ID":"6214e9a6-9472-42b7-be56-cd88296cc134","Type":"ContainerStarted","Data":"d1bb5015cac64579f9f06db189d3079101c550ce176e2659122500936a0494f2"} Nov 23 06:47:58 crc kubenswrapper[4988]: W1123 06:47:58.021418 4988 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9569d22d_764e_44fd_a6ff_6266c766b304.slice/crio-43b4901dd65b24246f39df699f92e7a21529d876107ab8f4bcf3ddfd4584e090 WatchSource:0}: Error finding container 43b4901dd65b24246f39df699f92e7a21529d876107ab8f4bcf3ddfd4584e090: Status 404 returned error can't find the container with id 43b4901dd65b24246f39df699f92e7a21529d876107ab8f4bcf3ddfd4584e090 Nov 23 06:47:58 crc kubenswrapper[4988]: I1123 06:47:58.352707 4988 patch_prober.go:28] interesting pod/router-default-5444994796-qm85c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:47:58 crc kubenswrapper[4988]: [-]has-synced failed: reason withheld Nov 23 06:47:58 crc kubenswrapper[4988]: [+]process-running ok Nov 23 06:47:58 crc kubenswrapper[4988]: healthz check failed Nov 23 06:47:58 crc kubenswrapper[4988]: I1123 06:47:58.353249 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qm85c" podUID="792f32a3-6f0a-4808-8e0d-17e8224d5ae4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:47:58 crc kubenswrapper[4988]: I1123 06:47:58.549343 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l8bn4"] Nov 23 06:47:58 crc kubenswrapper[4988]: W1123 06:47:58.561954 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded93687f_5cdf_4367_9cff_93b404983ba1.slice/crio-d83c69c2a3e2d104885c5cef1224cbe12d83610315075c1b737cedc9b2f1092f WatchSource:0}: Error finding container d83c69c2a3e2d104885c5cef1224cbe12d83610315075c1b737cedc9b2f1092f: Status 404 returned error can't find the container with id d83c69c2a3e2d104885c5cef1224cbe12d83610315075c1b737cedc9b2f1092f Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.029980 4988 generic.go:334] "Generic (PLEG): container finished" podID="9569d22d-764e-44fd-a6ff-6266c766b304" containerID="36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d" exitCode=0 Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.030810 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw2hq" event={"ID":"9569d22d-764e-44fd-a6ff-6266c766b304","Type":"ContainerDied","Data":"36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d"} Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.030834 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw2hq" event={"ID":"9569d22d-764e-44fd-a6ff-6266c766b304","Type":"ContainerStarted","Data":"43b4901dd65b24246f39df699f92e7a21529d876107ab8f4bcf3ddfd4584e090"} Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.035832 4988 generic.go:334] "Generic (PLEG): container finished" podID="274601f1-67ae-4d87-af93-385ddbeedf82" containerID="bb79961b6595aba506a246c1c5bef10110627af693d26c88305eeb413d408b65" exitCode=0 Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.035918 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"274601f1-67ae-4d87-af93-385ddbeedf82","Type":"ContainerDied","Data":"bb79961b6595aba506a246c1c5bef10110627af693d26c88305eeb413d408b65"} Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.040457 4988 generic.go:334] "Generic (PLEG): container finished" 
podID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerID="1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0" exitCode=0 Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.040496 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8bn4" event={"ID":"ed93687f-5cdf-4367-9cff-93b404983ba1","Type":"ContainerDied","Data":"1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0"} Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.040519 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8bn4" event={"ID":"ed93687f-5cdf-4367-9cff-93b404983ba1","Type":"ContainerStarted","Data":"d83c69c2a3e2d104885c5cef1224cbe12d83610315075c1b737cedc9b2f1092f"} Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.354451 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:47:59 crc kubenswrapper[4988]: I1123 06:47:59.357028 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qm85c" Nov 23 06:48:00 crc kubenswrapper[4988]: I1123 06:48:00.380513 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:48:00 crc kubenswrapper[4988]: I1123 06:48:00.567648 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/274601f1-67ae-4d87-af93-385ddbeedf82-kubelet-dir\") pod \"274601f1-67ae-4d87-af93-385ddbeedf82\" (UID: \"274601f1-67ae-4d87-af93-385ddbeedf82\") " Nov 23 06:48:00 crc kubenswrapper[4988]: I1123 06:48:00.567847 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/274601f1-67ae-4d87-af93-385ddbeedf82-kube-api-access\") pod \"274601f1-67ae-4d87-af93-385ddbeedf82\" (UID: \"274601f1-67ae-4d87-af93-385ddbeedf82\") " Nov 23 06:48:00 crc kubenswrapper[4988]: I1123 06:48:00.567876 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/274601f1-67ae-4d87-af93-385ddbeedf82-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "274601f1-67ae-4d87-af93-385ddbeedf82" (UID: "274601f1-67ae-4d87-af93-385ddbeedf82"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:48:00 crc kubenswrapper[4988]: I1123 06:48:00.568227 4988 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/274601f1-67ae-4d87-af93-385ddbeedf82-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:00 crc kubenswrapper[4988]: I1123 06:48:00.577496 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274601f1-67ae-4d87-af93-385ddbeedf82-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "274601f1-67ae-4d87-af93-385ddbeedf82" (UID: "274601f1-67ae-4d87-af93-385ddbeedf82"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:48:00 crc kubenswrapper[4988]: I1123 06:48:00.668975 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/274601f1-67ae-4d87-af93-385ddbeedf82-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:48:01 crc kubenswrapper[4988]: I1123 06:48:01.059373 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"274601f1-67ae-4d87-af93-385ddbeedf82","Type":"ContainerDied","Data":"c1b34e9a3031772c1c0bbc0d1677b554c9a44098a918eb502023769513175305"} Nov 23 06:48:01 crc kubenswrapper[4988]: I1123 06:48:01.059420 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1b34e9a3031772c1c0bbc0d1677b554c9a44098a918eb502023769513175305" Nov 23 06:48:01 crc kubenswrapper[4988]: I1123 06:48:01.059491 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:48:02 crc kubenswrapper[4988]: I1123 06:48:02.242204 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dzkgh" Nov 23 06:48:06 crc kubenswrapper[4988]: I1123 06:48:06.308168 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:48:06 crc kubenswrapper[4988]: I1123 06:48:06.313292 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:48:06 crc kubenswrapper[4988]: I1123 06:48:06.336891 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6hs4l" Nov 23 06:48:13 crc kubenswrapper[4988]: I1123 06:48:13.906563 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.451063 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.452039 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.452097 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.452259 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.454812 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.454841 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.455221 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.464975 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.469125 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.477016 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.478128 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.635655 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.669733 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.679111 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.720014 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.856991 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.859096 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 23 06:48:16 crc kubenswrapper[4988]: I1123 06:48:16.873222 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a94eb06-d03a-43c9-8004-73d48280435f-metrics-certs\") pod \"network-metrics-daemon-l5wgs\" (UID: \"1a94eb06-d03a-43c9-8004-73d48280435f\") " pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:48:17 crc kubenswrapper[4988]: I1123 06:48:17.010698 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 23 06:48:17 crc kubenswrapper[4988]: I1123 06:48:17.018795 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l5wgs" Nov 23 06:48:21 crc kubenswrapper[4988]: I1123 06:48:21.672921 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:48:21 crc kubenswrapper[4988]: I1123 06:48:21.673415 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:48:23 crc kubenswrapper[4988]: E1123 06:48:23.791443 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 23 06:48:23 crc kubenswrapper[4988]: E1123 06:48:23.791756 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfq9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5z4ss_openshift-marketplace(1d6d0eff-3d7a-4913-996a-d7db0261b1d7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:48:23 crc kubenswrapper[4988]: E1123 06:48:23.793372 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5z4ss" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" Nov 23 06:48:27 crc kubenswrapper[4988]: I1123 06:48:27.100662 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kmq57" Nov 23 06:48:33 crc kubenswrapper[4988]: E1123 06:48:33.337002 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5z4ss" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" Nov 23 06:48:38 crc kubenswrapper[4988]: E1123 06:48:38.240766 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 23 06:48:38 crc kubenswrapper[4988]: E1123 06:48:38.240924 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdhqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-l8bn4_openshift-marketplace(ed93687f-5cdf-4367-9cff-93b404983ba1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:48:38 crc kubenswrapper[4988]: E1123 06:48:38.242150 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l8bn4" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" Nov 23 06:48:43 crc kubenswrapper[4988]: E1123 06:48:43.898682 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l8bn4" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" Nov 23 06:48:46 crc kubenswrapper[4988]: E1123 06:48:46.976701 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 23 06:48:46 crc kubenswrapper[4988]: E1123 06:48:46.977352 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfdmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gcsbm_openshift-marketplace(fc4d3d52-3334-454d-8ea9-3acc065a17b3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:48:46 crc kubenswrapper[4988]: E1123 06:48:46.978609 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gcsbm" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" Nov 23 06:48:51 crc kubenswrapper[4988]: E1123 06:48:51.132704 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gcsbm" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" Nov 23 06:48:51 crc kubenswrapper[4988]: I1123 06:48:51.672375 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:48:51 crc kubenswrapper[4988]: I1123 06:48:51.672968 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:48:52 crc kubenswrapper[4988]: I1123 06:48:52.166860 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l5wgs"] Nov 23 06:48:52 crc kubenswrapper[4988]: W1123 06:48:52.172929 4988 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a94eb06_d03a_43c9_8004_73d48280435f.slice/crio-ea7930daa2637c29f343a5d55b9868eb33ccad418833dd33cb1fc4f08800a762 WatchSource:0}: Error finding container ea7930daa2637c29f343a5d55b9868eb33ccad418833dd33cb1fc4f08800a762: Status 404 returned error can't find the container with id ea7930daa2637c29f343a5d55b9868eb33ccad418833dd33cb1fc4f08800a762 Nov 23 06:48:52 crc kubenswrapper[4988]: W1123 06:48:52.368712 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-1cc3766d7c2d7e9194c4573bca6bac6473e10304f38ff6fa7fb51f31302f96d4 WatchSource:0}: Error finding container 1cc3766d7c2d7e9194c4573bca6bac6473e10304f38ff6fa7fb51f31302f96d4: Status 404 returned error can't find the container with id 1cc3766d7c2d7e9194c4573bca6bac6473e10304f38ff6fa7fb51f31302f96d4 Nov 23 06:48:52 crc kubenswrapper[4988]: E1123 06:48:52.386643 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 23 06:48:52 crc kubenswrapper[4988]: E1123 06:48:52.386799 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpxrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4ft6r_openshift-marketplace(2e5fbbbf-ab9f-494f-879f-b867959deb97): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:48:52 crc kubenswrapper[4988]: E1123 06:48:52.387917 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4ft6r" 
podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" Nov 23 06:48:52 crc kubenswrapper[4988]: I1123 06:48:52.428430 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"477de71cbc20188e0d841cb63144fb953f990afb5b64ad769119d0239e68e431"} Nov 23 06:48:52 crc kubenswrapper[4988]: I1123 06:48:52.431718 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9rzv" event={"ID":"6214e9a6-9472-42b7-be56-cd88296cc134","Type":"ContainerStarted","Data":"22c99e2b573eee8d2bc5187d34a5c91789209b147c75316bbcfe653e2d72f483"} Nov 23 06:48:52 crc kubenswrapper[4988]: I1123 06:48:52.433752 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1cc3766d7c2d7e9194c4573bca6bac6473e10304f38ff6fa7fb51f31302f96d4"} Nov 23 06:48:52 crc kubenswrapper[4988]: I1123 06:48:52.435779 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" event={"ID":"1a94eb06-d03a-43c9-8004-73d48280435f","Type":"ContainerStarted","Data":"ea7930daa2637c29f343a5d55b9868eb33ccad418833dd33cb1fc4f08800a762"} Nov 23 06:48:52 crc kubenswrapper[4988]: I1123 06:48:52.438158 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"192f61a81b7c16c785847ac7064778826aee26eb58492b6ccd892495d66fa982"} Nov 23 06:48:52 crc kubenswrapper[4988]: E1123 06:48:52.467679 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4ft6r" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" Nov 23 06:48:52 crc kubenswrapper[4988]: E1123 06:48:52.937588 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 23 06:48:52 crc kubenswrapper[4988]: E1123 06:48:52.938119 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb5x5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xw2hq_openshift-marketplace(9569d22d-764e-44fd-a6ff-6266c766b304): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:48:52 crc kubenswrapper[4988]: E1123 06:48:52.939283 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xw2hq" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" Nov 23 06:48:53 crc kubenswrapper[4988]: E1123 06:48:53.259294 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 23 06:48:53 crc kubenswrapper[4988]: E1123 06:48:53.259488 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hgg5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5gwgn_openshift-marketplace(20a119bf-0e89-4c4b-8502-bf5d7759a95d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:48:53 crc kubenswrapper[4988]: E1123 06:48:53.260690 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5gwgn" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" Nov 23 06:48:53 crc kubenswrapper[4988]: I1123 06:48:53.445065 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4df9a6b3d7eee9150bc504470a3f7ff49e9d32a2627aaf13c7f470f0133691d6"} Nov 23 06:48:53 crc kubenswrapper[4988]: I1123 06:48:53.448062 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"165a4a11b42435283a8472c2dcb5508faca70aaea9ae2d1b52cd45211d67622f"} Nov 23 06:48:53 crc kubenswrapper[4988]: I1123 06:48:53.449946 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" event={"ID":"1a94eb06-d03a-43c9-8004-73d48280435f","Type":"ContainerStarted","Data":"d34ffbdfa504cf2a1a630a6557a900cf3e8fb911421be84f5c996faa60460edb"} Nov 23 06:48:53 crc kubenswrapper[4988]: I1123 06:48:53.450300 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l5wgs" event={"ID":"1a94eb06-d03a-43c9-8004-73d48280435f","Type":"ContainerStarted","Data":"3278eadb66770e077f871a089384caefaa1dd32df27138667dc699f390b515fd"} Nov 23 06:48:53 crc kubenswrapper[4988]: I1123 06:48:53.464229 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0b28cf378aa04324c7ee779ab7398f96b91a6745b03394d909d591a361633915"} Nov 23 06:48:53 crc kubenswrapper[4988]: I1123 06:48:53.465088 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:48:53 crc kubenswrapper[4988]: I1123 06:48:53.469225 4988 generic.go:334] "Generic (PLEG): container finished" podID="6214e9a6-9472-42b7-be56-cd88296cc134" containerID="22c99e2b573eee8d2bc5187d34a5c91789209b147c75316bbcfe653e2d72f483" exitCode=0 Nov 23 06:48:53 crc kubenswrapper[4988]: I1123 06:48:53.470947 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9rzv" event={"ID":"6214e9a6-9472-42b7-be56-cd88296cc134","Type":"ContainerDied","Data":"22c99e2b573eee8d2bc5187d34a5c91789209b147c75316bbcfe653e2d72f483"} Nov 23 06:48:53 crc kubenswrapper[4988]: E1123 06:48:53.471306 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5gwgn" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" Nov 23 06:48:53 crc kubenswrapper[4988]: E1123 06:48:53.472896 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xw2hq" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" Nov 23 06:48:54 crc kubenswrapper[4988]: I1123 06:48:54.524879 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-l5wgs" podStartSLOduration=166.524850023 podStartE2EDuration="2m46.524850023s" podCreationTimestamp="2025-11-23 06:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:48:54.51378596 +0000 UTC m=+186.822298733" watchObservedRunningTime="2025-11-23 06:48:54.524850023 +0000 UTC m=+186.833362826" Nov 23 06:48:55 crc kubenswrapper[4988]: E1123 06:48:55.560569 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 23 06:48:55 crc kubenswrapper[4988]: E1123 06:48:55.560805 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhhpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-phks7_openshift-marketplace(d204a0c1-c9fb-436c-84e1-458826c49395): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:48:55 crc kubenswrapper[4988]: E1123 06:48:55.562100 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-phks7" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" Nov 23 06:48:56 crc kubenswrapper[4988]: I1123 06:48:56.506453 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerID="7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007" exitCode=0 Nov 23 06:48:56 crc kubenswrapper[4988]: I1123 06:48:56.506509 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5z4ss" event={"ID":"1d6d0eff-3d7a-4913-996a-d7db0261b1d7","Type":"ContainerDied","Data":"7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007"} Nov 23 06:48:56 crc kubenswrapper[4988]: E1123 06:48:56.566512 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-phks7" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" Nov 23 06:48:57 crc kubenswrapper[4988]: I1123 06:48:57.523557 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9rzv" event={"ID":"6214e9a6-9472-42b7-be56-cd88296cc134","Type":"ContainerStarted","Data":"20367d5cac99723075ac1bf82e9f6c677cc5a0ed24dad6f0b43edbc95eef4f10"} Nov 23 06:48:57 crc kubenswrapper[4988]: I1123 06:48:57.593150 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x9rzv" podStartSLOduration=3.037760833 podStartE2EDuration="1m1.593125272s" 
podCreationTimestamp="2025-11-23 06:47:56 +0000 UTC" firstStartedPulling="2025-11-23 06:47:58.012573818 +0000 UTC m=+130.321086581" lastFinishedPulling="2025-11-23 06:48:56.567938257 +0000 UTC m=+188.876451020" observedRunningTime="2025-11-23 06:48:57.554695887 +0000 UTC m=+189.863208690" watchObservedRunningTime="2025-11-23 06:48:57.593125272 +0000 UTC m=+189.901638045" Nov 23 06:48:59 crc kubenswrapper[4988]: I1123 06:48:59.540289 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5z4ss" event={"ID":"1d6d0eff-3d7a-4913-996a-d7db0261b1d7","Type":"ContainerStarted","Data":"7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a"} Nov 23 06:48:59 crc kubenswrapper[4988]: I1123 06:48:59.571287 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5z4ss" podStartSLOduration=2.630180563 podStartE2EDuration="1m3.571253739s" podCreationTimestamp="2025-11-23 06:47:56 +0000 UTC" firstStartedPulling="2025-11-23 06:47:58.019617322 +0000 UTC m=+130.328130085" lastFinishedPulling="2025-11-23 06:48:58.960690488 +0000 UTC m=+191.269203261" observedRunningTime="2025-11-23 06:48:59.57087937 +0000 UTC m=+191.879392173" watchObservedRunningTime="2025-11-23 06:48:59.571253739 +0000 UTC m=+191.879766502" Nov 23 06:49:06 crc kubenswrapper[4988]: I1123 06:49:06.442135 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:49:06 crc kubenswrapper[4988]: I1123 06:49:06.443133 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:49:06 crc kubenswrapper[4988]: I1123 06:49:06.889035 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:49:06 crc kubenswrapper[4988]: I1123 06:49:06.889688 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:49:06 crc kubenswrapper[4988]: I1123 06:49:06.959522 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:49:06 crc kubenswrapper[4988]: I1123 06:49:06.960412 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:49:07 crc kubenswrapper[4988]: I1123 06:49:07.028813 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:49:07 crc kubenswrapper[4988]: I1123 06:49:07.609828 4988 generic.go:334] "Generic (PLEG): container finished" podID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerID="a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153" exitCode=0 Nov 23 06:49:07 crc kubenswrapper[4988]: I1123 06:49:07.611404 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8bn4" event={"ID":"ed93687f-5cdf-4367-9cff-93b404983ba1","Type":"ContainerDied","Data":"a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153"} Nov 23 06:49:07 crc kubenswrapper[4988]: I1123 06:49:07.665054 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:49:09 crc kubenswrapper[4988]: I1123 06:49:09.950186 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-x9rzv"] Nov 23 06:49:09 crc kubenswrapper[4988]: I1123 06:49:09.951342 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x9rzv" podUID="6214e9a6-9472-42b7-be56-cd88296cc134" containerName="registry-server" containerID="cri-o://20367d5cac99723075ac1bf82e9f6c677cc5a0ed24dad6f0b43edbc95eef4f10" gracePeriod=2 Nov 23 06:49:10 crc kubenswrapper[4988]: I1123 06:49:10.634126 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw2hq" event={"ID":"9569d22d-764e-44fd-a6ff-6266c766b304","Type":"ContainerStarted","Data":"23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8"} Nov 23 06:49:10 crc kubenswrapper[4988]: I1123 06:49:10.636649 4988 generic.go:334] "Generic (PLEG): container finished" podID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerID="762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d" exitCode=0 Nov 23 06:49:10 crc kubenswrapper[4988]: I1123 06:49:10.636755 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcsbm" event={"ID":"fc4d3d52-3334-454d-8ea9-3acc065a17b3","Type":"ContainerDied","Data":"762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d"} Nov 23 06:49:10 crc kubenswrapper[4988]: I1123 06:49:10.639034 4988 generic.go:334] "Generic (PLEG): container finished" podID="6214e9a6-9472-42b7-be56-cd88296cc134" containerID="20367d5cac99723075ac1bf82e9f6c677cc5a0ed24dad6f0b43edbc95eef4f10" exitCode=0 Nov 23 06:49:10 crc kubenswrapper[4988]: I1123 06:49:10.639073 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9rzv" event={"ID":"6214e9a6-9472-42b7-be56-cd88296cc134","Type":"ContainerDied","Data":"20367d5cac99723075ac1bf82e9f6c677cc5a0ed24dad6f0b43edbc95eef4f10"} Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.646941 4988 generic.go:334] "Generic (PLEG): container finished" podID="9569d22d-764e-44fd-a6ff-6266c766b304" containerID="23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8" exitCode=0 Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.647023 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw2hq" event={"ID":"9569d22d-764e-44fd-a6ff-6266c766b304","Type":"ContainerDied","Data":"23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8"} Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.651962 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x9rzv" event={"ID":"6214e9a6-9472-42b7-be56-cd88296cc134","Type":"ContainerDied","Data":"d1bb5015cac64579f9f06db189d3079101c550ce176e2659122500936a0494f2"} Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.651999 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1bb5015cac64579f9f06db189d3079101c550ce176e2659122500936a0494f2" Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.675599 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.877261 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45vr\" (UniqueName: \"kubernetes.io/projected/6214e9a6-9472-42b7-be56-cd88296cc134-kube-api-access-b45vr\") pod \"6214e9a6-9472-42b7-be56-cd88296cc134\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.877364 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-catalog-content\") pod \"6214e9a6-9472-42b7-be56-cd88296cc134\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.877404 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-utilities\") pod \"6214e9a6-9472-42b7-be56-cd88296cc134\" (UID: \"6214e9a6-9472-42b7-be56-cd88296cc134\") " Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.879445 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-utilities" (OuterVolumeSpecName: "utilities") pod "6214e9a6-9472-42b7-be56-cd88296cc134" (UID: "6214e9a6-9472-42b7-be56-cd88296cc134"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.892101 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6214e9a6-9472-42b7-be56-cd88296cc134-kube-api-access-b45vr" (OuterVolumeSpecName: "kube-api-access-b45vr") pod "6214e9a6-9472-42b7-be56-cd88296cc134" (UID: "6214e9a6-9472-42b7-be56-cd88296cc134"). InnerVolumeSpecName "kube-api-access-b45vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.916730 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6214e9a6-9472-42b7-be56-cd88296cc134" (UID: "6214e9a6-9472-42b7-be56-cd88296cc134"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.979853 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45vr\" (UniqueName: \"kubernetes.io/projected/6214e9a6-9472-42b7-be56-cd88296cc134-kube-api-access-b45vr\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.979929 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:11 crc kubenswrapper[4988]: I1123 06:49:11.979955 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6214e9a6-9472-42b7-be56-cd88296cc134-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:12 crc kubenswrapper[4988]: I1123 06:49:12.658535 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x9rzv" Nov 23 06:49:12 crc kubenswrapper[4988]: I1123 06:49:12.690347 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9rzv"] Nov 23 06:49:12 crc kubenswrapper[4988]: I1123 06:49:12.698185 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x9rzv"] Nov 23 06:49:14 crc kubenswrapper[4988]: I1123 06:49:14.516693 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6214e9a6-9472-42b7-be56-cd88296cc134" path="/var/lib/kubelet/pods/6214e9a6-9472-42b7-be56-cd88296cc134/volumes" Nov 23 06:49:17 crc kubenswrapper[4988]: I1123 06:49:17.689995 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8bn4" event={"ID":"ed93687f-5cdf-4367-9cff-93b404983ba1","Type":"ContainerStarted","Data":"a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125"} Nov 23 06:49:18 crc kubenswrapper[4988]: I1123 06:49:18.731334 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l8bn4" podStartSLOduration=7.868926343 podStartE2EDuration="1m21.731306078s" podCreationTimestamp="2025-11-23 06:47:57 +0000 UTC" firstStartedPulling="2025-11-23 06:47:59.042209183 +0000 UTC m=+131.350721946" lastFinishedPulling="2025-11-23 06:49:12.904588898 +0000 UTC m=+205.213101681" observedRunningTime="2025-11-23 06:49:18.729968974 +0000 UTC m=+211.038481747" watchObservedRunningTime="2025-11-23 06:49:18.731306078 +0000 UTC m=+211.039818891" Nov 23 06:49:21 crc kubenswrapper[4988]: I1123 06:49:21.672582 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:49:21 crc kubenswrapper[4988]: I1123 06:49:21.673374 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:49:21 crc kubenswrapper[4988]: I1123 06:49:21.673451 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:49:21 crc kubenswrapper[4988]: I1123 06:49:21.674108 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:49:21 crc kubenswrapper[4988]: I1123 06:49:21.674238 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0" gracePeriod=600 Nov 23 06:49:22 crc kubenswrapper[4988]: I1123 06:49:22.731255 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4ft6r" event={"ID":"2e5fbbbf-ab9f-494f-879f-b867959deb97","Type":"ContainerStarted","Data":"a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0"} Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.738535 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0" exitCode=0 Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.738727 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0"} Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.741882 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phks7" event={"ID":"d204a0c1-c9fb-436c-84e1-458826c49395","Type":"ContainerStarted","Data":"b643788b5c3c782f5270b8c7a85fa74d209a14134d478a391bbeacc361eb8f96"} Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.743606 4988 generic.go:334] "Generic (PLEG): container finished" podID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerID="0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d" exitCode=0 Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.743678 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gwgn" event={"ID":"20a119bf-0e89-4c4b-8502-bf5d7759a95d","Type":"ContainerDied","Data":"0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d"} Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.746016 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw2hq" event={"ID":"9569d22d-764e-44fd-a6ff-6266c766b304","Type":"ContainerStarted","Data":"38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff"} Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.748048 4988 generic.go:334] "Generic (PLEG): container finished" podID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerID="a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0" exitCode=0 Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.748109 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ft6r" event={"ID":"2e5fbbbf-ab9f-494f-879f-b867959deb97","Type":"ContainerDied","Data":"a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0"} Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.752276 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcsbm" event={"ID":"fc4d3d52-3334-454d-8ea9-3acc065a17b3","Type":"ContainerStarted","Data":"2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9"} Nov 23 06:49:23 crc kubenswrapper[4988]: I1123 06:49:23.790909 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gcsbm" podStartSLOduration=4.830427801 podStartE2EDuration="1m30.790888682s" podCreationTimestamp="2025-11-23 06:47:53 +0000 UTC" firstStartedPulling="2025-11-23 06:47:54.857760645 +0000 UTC m=+127.166273408" lastFinishedPulling="2025-11-23 06:49:20.818221526 +0000 UTC m=+213.126734289" observedRunningTime="2025-11-23 06:49:23.786858299 +0000 UTC m=+216.095371062" watchObservedRunningTime="2025-11-23 06:49:23.790888682 +0000 UTC m=+216.099401445" 
Nov 23 06:49:24 crc kubenswrapper[4988]: I1123 06:49:24.268887 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gcsbm"
Nov 23 06:49:24 crc kubenswrapper[4988]: I1123 06:49:24.268997 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gcsbm"
Nov 23 06:49:24 crc kubenswrapper[4988]: I1123 06:49:24.759279 4988 generic.go:334] "Generic (PLEG): container finished" podID="d204a0c1-c9fb-436c-84e1-458826c49395" containerID="b643788b5c3c782f5270b8c7a85fa74d209a14134d478a391bbeacc361eb8f96" exitCode=0
Nov 23 06:49:24 crc kubenswrapper[4988]: I1123 06:49:24.759407 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phks7" event={"ID":"d204a0c1-c9fb-436c-84e1-458826c49395","Type":"ContainerDied","Data":"b643788b5c3c782f5270b8c7a85fa74d209a14134d478a391bbeacc361eb8f96"}
Nov 23 06:49:24 crc kubenswrapper[4988]: I1123 06:49:24.819365 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xw2hq" podStartSLOduration=6.032046838 podStartE2EDuration="1m27.819344832s" podCreationTimestamp="2025-11-23 06:47:57 +0000 UTC" firstStartedPulling="2025-11-23 06:47:59.032159624 +0000 UTC m=+131.340672387" lastFinishedPulling="2025-11-23 06:49:20.819457608 +0000 UTC m=+213.127970381" observedRunningTime="2025-11-23 06:49:24.814436436 +0000 UTC m=+217.122949209" watchObservedRunningTime="2025-11-23 06:49:24.819344832 +0000 UTC m=+217.127857595"
Nov 23 06:49:25 crc kubenswrapper[4988]: I1123 06:49:25.315824 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gcsbm" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerName="registry-server" probeResult="failure" output=<
Nov 23 06:49:25 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s
Nov 23 06:49:25 crc kubenswrapper[4988]: >
Nov 23 06:49:26 crc kubenswrapper[4988]: I1123 06:49:26.705540 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:49:27 crc kubenswrapper[4988]: I1123 06:49:27.545732 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xw2hq"
Nov 23 06:49:27 crc kubenswrapper[4988]: I1123 06:49:27.546285 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xw2hq"
Nov 23 06:49:27 crc kubenswrapper[4988]: I1123 06:49:27.778867 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"9cd87be3f1b515f3063cda794e5846ca1378607ebdd6150a0a83790b2e31e36b"}
Nov 23 06:49:27 crc kubenswrapper[4988]: I1123 06:49:27.847913 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l8bn4"
Nov 23 06:49:27 crc kubenswrapper[4988]: I1123 06:49:27.848038 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l8bn4"
Nov 23 06:49:27 crc kubenswrapper[4988]: I1123 06:49:27.897024 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l8bn4"
Nov 23 06:49:28 crc kubenswrapper[4988]: I1123 06:49:28.620662 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xw2hq" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" containerName="registry-server" probeResult="failure" output=<
Nov 23 06:49:28 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s
Nov 23 06:49:28 crc kubenswrapper[4988]: >
Nov 23 06:49:28 crc kubenswrapper[4988]: I1123 06:49:28.850976 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l8bn4"
Nov 23 06:49:30 crc kubenswrapper[4988]: I1123 06:49:30.594958 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8bn4"]
Nov 23 06:49:30 crc kubenswrapper[4988]: I1123 06:49:30.884144 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ft6r" event={"ID":"2e5fbbbf-ab9f-494f-879f-b867959deb97","Type":"ContainerStarted","Data":"fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197"}
Nov 23 06:49:30 crc kubenswrapper[4988]: I1123 06:49:30.884684 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l8bn4" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerName="registry-server" containerID="cri-o://a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125" gracePeriod=2
Nov 23 06:49:30 crc kubenswrapper[4988]: I1123 06:49:30.916208 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4ft6r" podStartSLOduration=4.311235484 podStartE2EDuration="1m36.91617235s" podCreationTimestamp="2025-11-23 06:47:54 +0000 UTC" firstStartedPulling="2025-11-23 06:47:55.859951961 +0000 UTC m=+128.168464724" lastFinishedPulling="2025-11-23 06:49:28.464888787 +0000 UTC m=+220.773401590" observedRunningTime="2025-11-23 06:49:30.912573668 +0000 UTC m=+223.221086431" watchObservedRunningTime="2025-11-23 06:49:30.91617235 +0000 UTC m=+223.224685113"
Nov 23 06:49:31 crc kubenswrapper[4988]: E1123 06:49:31.058870 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded93687f_5cdf_4367_9cff_93b404983ba1.slice/crio-a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.782121 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8bn4"
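The probe output above ('timeout: failed to connect service ":50051" within 1s') is the message format of a gRPC health-probe binary: these registry-server containers expose the standard gRPC health service on port 50051, and the startup probe keeps failing while the just-started server is still loading its catalog. A minimal Go equivalent of such a probe (an illustrative sketch, not the exact binary the catalog pods ship; the address and the one-second budget mirror the log):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// One-second overall budget, matching the probe timeout in the log.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Block until connected or the deadline expires.
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1)
	}
	defer conn.Close()

	// Standard gRPC health check; an empty Service name asks about the server overall.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		os.Exit(1)
	}
	fmt.Println("status:", resp.GetStatus())
}

A non-zero exit marks the probe attempt as a failure, which is exactly what the "Probe failed ... probeResult=failure" entries record until the server starts answering SERVING.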
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.883748 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-catalog-content\") pod \"ed93687f-5cdf-4367-9cff-93b404983ba1\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") "
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.883831 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdhqq\" (UniqueName: \"kubernetes.io/projected/ed93687f-5cdf-4367-9cff-93b404983ba1-kube-api-access-mdhqq\") pod \"ed93687f-5cdf-4367-9cff-93b404983ba1\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") "
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.883864 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-utilities\") pod \"ed93687f-5cdf-4367-9cff-93b404983ba1\" (UID: \"ed93687f-5cdf-4367-9cff-93b404983ba1\") "
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.884921 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-utilities" (OuterVolumeSpecName: "utilities") pod "ed93687f-5cdf-4367-9cff-93b404983ba1" (UID: "ed93687f-5cdf-4367-9cff-93b404983ba1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.891562 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phks7" event={"ID":"d204a0c1-c9fb-436c-84e1-458826c49395","Type":"ContainerStarted","Data":"d1d6ee729472ea2d3279f02ff125d66f001015245cbcec666717b7cdf8b43de3"}
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.893367 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed93687f-5cdf-4367-9cff-93b404983ba1-kube-api-access-mdhqq" (OuterVolumeSpecName: "kube-api-access-mdhqq") pod "ed93687f-5cdf-4367-9cff-93b404983ba1" (UID: "ed93687f-5cdf-4367-9cff-93b404983ba1"). InnerVolumeSpecName "kube-api-access-mdhqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.895116 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gwgn" event={"ID":"20a119bf-0e89-4c4b-8502-bf5d7759a95d","Type":"ContainerStarted","Data":"22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f"}
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.897657 4988 generic.go:334] "Generic (PLEG): container finished" podID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerID="a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125" exitCode=0
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.898140 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l8bn4"
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.898277 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8bn4" event={"ID":"ed93687f-5cdf-4367-9cff-93b404983ba1","Type":"ContainerDied","Data":"a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125"}
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.898306 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l8bn4" event={"ID":"ed93687f-5cdf-4367-9cff-93b404983ba1","Type":"ContainerDied","Data":"d83c69c2a3e2d104885c5cef1224cbe12d83610315075c1b737cedc9b2f1092f"}
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.898323 4988 scope.go:117] "RemoveContainer" containerID="a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125"
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.918137 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-phks7" podStartSLOduration=2.66698733 podStartE2EDuration="1m37.918120569s" podCreationTimestamp="2025-11-23 06:47:54 +0000 UTC" firstStartedPulling="2025-11-23 06:47:55.860661119 +0000 UTC m=+128.169173882" lastFinishedPulling="2025-11-23 06:49:31.111794358 +0000 UTC m=+223.420307121" observedRunningTime="2025-11-23 06:49:31.915365699 +0000 UTC m=+224.223878472" watchObservedRunningTime="2025-11-23 06:49:31.918120569 +0000 UTC m=+224.226633342"
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.922874 4988 scope.go:117] "RemoveContainer" containerID="a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153"
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.949923 4988 scope.go:117] "RemoveContainer" containerID="1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0"
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.967177 4988 scope.go:117] "RemoveContainer" containerID="a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125"
Nov 23 06:49:31 crc kubenswrapper[4988]: E1123 06:49:31.967729 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125\": container with ID starting with a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125 not found: ID does not exist" containerID="a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125"
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.967769 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125"} err="failed to get container status \"a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125\": rpc error: code = NotFound desc = could not find container \"a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125\": container with ID starting with a70f5e02ea6bb247c9dc57b3c46f5b3f9cc98b171c4bf5904f713c72d2637125 not found: ID does not exist"
Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.967792 4988 scope.go:117] "RemoveContainer" containerID="a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153"
Nov 23 06:49:31 crc kubenswrapper[4988]: E1123 06:49:31.968103 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153\": container with ID starting with a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153 not found: ID does not exist" containerID="a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153"
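The E/I pairs above ("ContainerStatus from runtime service failed" followed by "DeleteContainer returned error") look alarming but are benign: the kubelet re-issues RemoveContainer for IDs that were already deleted, the runtime answers NotFound, and the sync loop moves on. The pattern they reflect is delete-as-idempotent. A hedged sketch of that pattern (RuntimeClient and fakeRuntime are hypothetical stand-ins; the real kubelet talks CRI over gRPC):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// RuntimeClient is a hypothetical stand-in for a CRI-style runtime client.
type RuntimeClient interface {
	RemoveContainer(ctx context.Context, id string) error
}

// removeContainerIdempotent treats NotFound as success: a second delete of an
// already-removed container is not worth failing the sync loop over, which is
// why the NotFound errors above are logged and then ignored.
func removeContainerIdempotent(ctx context.Context, rt RuntimeClient, id string) error {
	err := rt.RemoveContainer(ctx, id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil // gone either way; idempotent success
	}
	return fmt.Errorf("remove container %q: %w", id, err)
}

// fakeRuntime always reports the container as already gone, mimicking the log.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(ctx context.Context, id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	err := removeContainerIdempotent(context.Background(), fakeRuntime{}, "a70f5e02")
	fmt.Println("err:", err) // err: <nil> -- NotFound swallowed
}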
\"a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153\": container with ID starting with a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153 not found: ID does not exist" containerID="a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153" Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.968135 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153"} err="failed to get container status \"a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153\": rpc error: code = NotFound desc = could not find container \"a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153\": container with ID starting with a1540463d3ac972b4a6a24b47e58feda4ab3a6e52fdc178d3fb0f51f10959153 not found: ID does not exist" Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.968156 4988 scope.go:117] "RemoveContainer" containerID="1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0" Nov 23 06:49:31 crc kubenswrapper[4988]: E1123 06:49:31.968411 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0\": container with ID starting with 1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0 not found: ID does not exist" containerID="1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0" Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.968434 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0"} err="failed to get container status \"1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0\": rpc error: code = NotFound desc = could not find container \"1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0\": container with ID starting with 1cd82eef05bf9e073952e23d888dbc2ed9d3c0df19d8e574ad790d9ea5f9ecd0 not found: ID does not exist" Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.980871 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed93687f-5cdf-4367-9cff-93b404983ba1" (UID: "ed93687f-5cdf-4367-9cff-93b404983ba1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.985350 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.985393 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdhqq\" (UniqueName: \"kubernetes.io/projected/ed93687f-5cdf-4367-9cff-93b404983ba1-kube-api-access-mdhqq\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:31 crc kubenswrapper[4988]: I1123 06:49:31.985414 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed93687f-5cdf-4367-9cff-93b404983ba1-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:32 crc kubenswrapper[4988]: I1123 06:49:32.239821 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5gwgn" podStartSLOduration=2.573479958 podStartE2EDuration="1m38.23979674s" podCreationTimestamp="2025-11-23 06:47:54 +0000 UTC" firstStartedPulling="2025-11-23 06:47:55.865466488 +0000 UTC m=+128.173979241" lastFinishedPulling="2025-11-23 06:49:31.53178325 +0000 UTC m=+223.840296023" observedRunningTime="2025-11-23 06:49:31.940154084 +0000 UTC m=+224.248666857" watchObservedRunningTime="2025-11-23 06:49:32.23979674 +0000 UTC m=+224.548309503" Nov 23 06:49:32 crc kubenswrapper[4988]: I1123 06:49:32.241150 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l8bn4"] Nov 23 06:49:32 crc kubenswrapper[4988]: I1123 06:49:32.245846 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l8bn4"] Nov 23 06:49:32 crc kubenswrapper[4988]: I1123 06:49:32.507253 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" path="/var/lib/kubelet/pods/ed93687f-5cdf-4367-9cff-93b404983ba1/volumes" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.343030 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.393995 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.448943 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.448994 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.507711 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.666895 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-phks7" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.666968 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-phks7" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.714023 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-phks7" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.832055 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.832135 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:49:34 crc kubenswrapper[4988]: I1123 06:49:34.902585 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:49:37 crc kubenswrapper[4988]: I1123 06:49:37.609448 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:49:37 crc kubenswrapper[4988]: I1123 06:49:37.656166 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xw2hq" Nov 23 06:49:44 crc kubenswrapper[4988]: I1123 06:49:44.494815 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:49:44 crc kubenswrapper[4988]: I1123 06:49:44.712825 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-phks7" Nov 23 06:49:44 crc kubenswrapper[4988]: I1123 06:49:44.888821 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:49:46 crc kubenswrapper[4988]: I1123 06:49:46.524687 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phks7"] Nov 23 06:49:46 crc kubenswrapper[4988]: I1123 06:49:46.524911 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-phks7" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" containerName="registry-server" containerID="cri-o://d1d6ee729472ea2d3279f02ff125d66f001015245cbcec666717b7cdf8b43de3" gracePeriod=2 Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.000697 4988 generic.go:334] "Generic (PLEG): container finished" podID="d204a0c1-c9fb-436c-84e1-458826c49395" containerID="d1d6ee729472ea2d3279f02ff125d66f001015245cbcec666717b7cdf8b43de3" exitCode=0 Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.000742 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phks7" event={"ID":"d204a0c1-c9fb-436c-84e1-458826c49395","Type":"ContainerDied","Data":"d1d6ee729472ea2d3279f02ff125d66f001015245cbcec666717b7cdf8b43de3"} Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.125961 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4ft6r"] Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.126333 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4ft6r" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerName="registry-server" containerID="cri-o://fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197" gracePeriod=2 Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.411004 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phks7" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.512960 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4ft6r" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.598557 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-utilities\") pod \"d204a0c1-c9fb-436c-84e1-458826c49395\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.598630 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-catalog-content\") pod \"d204a0c1-c9fb-436c-84e1-458826c49395\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.598692 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-catalog-content\") pod \"2e5fbbbf-ab9f-494f-879f-b867959deb97\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.598749 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhhpz\" (UniqueName: \"kubernetes.io/projected/d204a0c1-c9fb-436c-84e1-458826c49395-kube-api-access-bhhpz\") pod \"d204a0c1-c9fb-436c-84e1-458826c49395\" (UID: \"d204a0c1-c9fb-436c-84e1-458826c49395\") " Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.598764 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-utilities\") pod \"2e5fbbbf-ab9f-494f-879f-b867959deb97\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.598794 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpxrk\" (UniqueName: \"kubernetes.io/projected/2e5fbbbf-ab9f-494f-879f-b867959deb97-kube-api-access-jpxrk\") pod \"2e5fbbbf-ab9f-494f-879f-b867959deb97\" (UID: \"2e5fbbbf-ab9f-494f-879f-b867959deb97\") " Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.600078 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-utilities" (OuterVolumeSpecName: "utilities") pod "d204a0c1-c9fb-436c-84e1-458826c49395" (UID: "d204a0c1-c9fb-436c-84e1-458826c49395"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.603902 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-utilities" (OuterVolumeSpecName: "utilities") pod "2e5fbbbf-ab9f-494f-879f-b867959deb97" (UID: "2e5fbbbf-ab9f-494f-879f-b867959deb97"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.605893 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5fbbbf-ab9f-494f-879f-b867959deb97-kube-api-access-jpxrk" (OuterVolumeSpecName: "kube-api-access-jpxrk") pod "2e5fbbbf-ab9f-494f-879f-b867959deb97" (UID: "2e5fbbbf-ab9f-494f-879f-b867959deb97"). InnerVolumeSpecName "kube-api-access-jpxrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.606996 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d204a0c1-c9fb-436c-84e1-458826c49395-kube-api-access-bhhpz" (OuterVolumeSpecName: "kube-api-access-bhhpz") pod "d204a0c1-c9fb-436c-84e1-458826c49395" (UID: "d204a0c1-c9fb-436c-84e1-458826c49395"). InnerVolumeSpecName "kube-api-access-bhhpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.665172 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d204a0c1-c9fb-436c-84e1-458826c49395" (UID: "d204a0c1-c9fb-436c-84e1-458826c49395"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.665695 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e5fbbbf-ab9f-494f-879f-b867959deb97" (UID: "2e5fbbbf-ab9f-494f-879f-b867959deb97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.700377 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpxrk\" (UniqueName: \"kubernetes.io/projected/2e5fbbbf-ab9f-494f-879f-b867959deb97-kube-api-access-jpxrk\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.700417 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.700429 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d204a0c1-c9fb-436c-84e1-458826c49395-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.700438 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.700447 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhhpz\" (UniqueName: \"kubernetes.io/projected/d204a0c1-c9fb-436c-84e1-458826c49395-kube-api-access-bhhpz\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:47 crc kubenswrapper[4988]: I1123 06:49:47.700455 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e5fbbbf-ab9f-494f-879f-b867959deb97-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.007677 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phks7" event={"ID":"d204a0c1-c9fb-436c-84e1-458826c49395","Type":"ContainerDied","Data":"0b5b62598b69154e5008e9322e4ec61ebbd0de2ef50e9734a6fc69124d44f552"} Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.007728 4988 scope.go:117] "RemoveContainer" containerID="d1d6ee729472ea2d3279f02ff125d66f001015245cbcec666717b7cdf8b43de3" Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.007788 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-phks7" Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.010902 4988 generic.go:334] "Generic (PLEG): container finished" podID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerID="fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197" exitCode=0 Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.010949 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ft6r" event={"ID":"2e5fbbbf-ab9f-494f-879f-b867959deb97","Type":"ContainerDied","Data":"fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197"} Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.010977 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4ft6r" event={"ID":"2e5fbbbf-ab9f-494f-879f-b867959deb97","Type":"ContainerDied","Data":"c7f5881da7c1e124c96610bd6566dd012515e0c5ff6b6059faff7c377660cfcc"} Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.010998 4988 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.026078 4988 scope.go:117] "RemoveContainer" containerID="b643788b5c3c782f5270b8c7a85fa74d209a14134d478a391bbeacc361eb8f96"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.048505 4988 scope.go:117] "RemoveContainer" containerID="9bda43c37153cea21f4e2341306f4c1ecc7fbc525c2278991c712ebc79ddd6cd"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.052909 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-phks7"]
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.060228 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-phks7"]
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.064323 4988 scope.go:117] "RemoveContainer" containerID="fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.071931 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4ft6r"]
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.075372 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4ft6r"]
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.084774 4988 scope.go:117] "RemoveContainer" containerID="a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.106474 4988 scope.go:117] "RemoveContainer" containerID="31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.123717 4988 scope.go:117] "RemoveContainer" containerID="fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197"
Nov 23 06:49:48 crc kubenswrapper[4988]: E1123 06:49:48.124521 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197\": container with ID starting with fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197 not found: ID does not exist" containerID="fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.124563 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197"} err="failed to get container status \"fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197\": rpc error: code = NotFound desc = could not find container \"fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197\": container with ID starting with fae4d82d80c7f92d2368f95ea757160dd758a1921fddbb28cfa32c902b6b5197 not found: ID does not exist"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.124612 4988 scope.go:117] "RemoveContainer" containerID="a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0"
Nov 23 06:49:48 crc kubenswrapper[4988]: E1123 06:49:48.125120 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0\": container with ID starting with a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0 not found: ID does not exist" containerID="a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.125152 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0"} err="failed to get container status \"a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0\": rpc error: code = NotFound desc = could not find container \"a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0\": container with ID starting with a878bc650ac7bb35c2ddea5a24765a5faf7a93ea68f589f18adb88fee7d6d2e0 not found: ID does not exist"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.125166 4988 scope.go:117] "RemoveContainer" containerID="31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81"
Nov 23 06:49:48 crc kubenswrapper[4988]: E1123 06:49:48.125615 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81\": container with ID starting with 31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81 not found: ID does not exist" containerID="31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.125667 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81"} err="failed to get container status \"31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81\": rpc error: code = NotFound desc = could not find container \"31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81\": container with ID starting with 31f03264404d0aefda48f42c0f8ce221cedfb9bed37ca0bf2192a88c161f8e81 not found: ID does not exist"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.503036 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" path="/var/lib/kubelet/pods/2e5fbbbf-ab9f-494f-879f-b867959deb97/volumes"
Nov 23 06:49:48 crc kubenswrapper[4988]: I1123 06:49:48.504847 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" path="/var/lib/kubelet/pods/d204a0c1-c9fb-436c-84e1-458826c49395/volumes"
Nov 23 06:50:30 crc kubenswrapper[4988]: I1123 06:50:30.856941 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xg5r6"]
Nov 23 06:50:48 crc kubenswrapper[4988]: I1123 06:50:48.259378 4988 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Nov 23 06:50:55 crc kubenswrapper[4988]: I1123 06:50:55.886875 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" podUID="c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" containerName="oauth-openshift" containerID="cri-o://4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2" gracePeriod=15
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.275622 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6"
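The cert_rotation.go:91 entry above ("certificate rotation detected, shutting down client connections to start using new credentials") marks the kubelet noticing that its client certificate was renewed and forcing new TLS handshakes. The detection half can be approximated with a file watch; a hedged Go sketch (the watched path is the usual kubelet client-cert symlink on nodes like this one, fsnotify is an assumed third-party dependency, and the kubelet's own implementation differs):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	// Assumed path: the rotating client-cert symlink the kubelet keeps current.
	if err := w.Add("/var/lib/kubelet/pki/kubelet-client-current.pem"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		if ev.Op&(fsnotify.Write|fsnotify.Create) != 0 {
			// On rotation: drop idle connections so the next dial presents
			// the new credential, e.g. transport.CloseIdleConnections().
			log.Println("certificate rotation detected; shutting down client connections")
		}
	}
}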
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318383 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-8445cf6b-jvv6t"]
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318743 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerName="extract-utilities"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318766 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerName="extract-utilities"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318781 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6214e9a6-9472-42b7-be56-cd88296cc134" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318792 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6214e9a6-9472-42b7-be56-cd88296cc134" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318826 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerName="extract-content"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318835 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerName="extract-content"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318844 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerName="extract-content"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318853 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerName="extract-content"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318863 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" containerName="extract-content"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318871 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" containerName="extract-content"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318892 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6214e9a6-9472-42b7-be56-cd88296cc134" containerName="extract-content"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318901 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6214e9a6-9472-42b7-be56-cd88296cc134" containerName="extract-content"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318914 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274601f1-67ae-4d87-af93-385ddbeedf82" containerName="pruner"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318922 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="274601f1-67ae-4d87-af93-385ddbeedf82" containerName="pruner"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318931 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" containerName="extract-utilities"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318940 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" containerName="extract-utilities"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318951 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerName="extract-utilities"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318962 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerName="extract-utilities"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318973 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6214e9a6-9472-42b7-be56-cd88296cc134" containerName="extract-utilities"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.318981 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6214e9a6-9472-42b7-be56-cd88296cc134" containerName="extract-utilities"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.318995 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319002 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.319012 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" containerName="oauth-openshift"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319021 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" containerName="oauth-openshift"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.319037 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319044 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.319055 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319063 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319228 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="274601f1-67ae-4d87-af93-385ddbeedf82" containerName="pruner"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319246 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" containerName="oauth-openshift"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319259 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5fbbbf-ab9f-494f-879f-b867959deb97" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319272 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed93687f-5cdf-4367-9cff-93b404983ba1" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319283 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6214e9a6-9472-42b7-be56-cd88296cc134" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319296 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d204a0c1-c9fb-436c-84e1-458826c49395" containerName="registry-server"
Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.319866 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t"
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.322544 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8445cf6b-jvv6t"] Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.391930 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-login\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392032 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-provider-selection\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392081 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-ocp-branding-template\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392115 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-dir\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392180 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-trusted-ca-bundle\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392258 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-service-ca\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392296 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-router-certs\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392323 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-idp-0-file-data\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392353 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-rctbb\" (UniqueName: \"kubernetes.io/projected/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-kube-api-access-rctbb\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392404 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-error\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392439 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-serving-cert\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392464 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-session\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392496 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-policies\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392527 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-cliconfig\") pod \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\" (UID: \"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216\") " Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.392590 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.393079 4988 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.393324 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.393591 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.393671 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.393877 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.398301 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.399995 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.400215 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-kube-api-access-rctbb" (OuterVolumeSpecName: "kube-api-access-rctbb") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "kube-api-access-rctbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.400806 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.405416 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.406139 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.406150 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.406407 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.406896 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" (UID: "c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.422317 4988 generic.go:334] "Generic (PLEG): container finished" podID="c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" containerID="4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2" exitCode=0 Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.422372 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" event={"ID":"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216","Type":"ContainerDied","Data":"4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2"} Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.422418 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" event={"ID":"c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216","Type":"ContainerDied","Data":"b0ac5d057602a0727c6297344f667b88a9374d0040264c2f3bb680a398c54b9c"} Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.422420 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xg5r6" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.422447 4988 scope.go:117] "RemoveContainer" containerID="4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.456521 4988 scope.go:117] "RemoveContainer" containerID="4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.457024 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xg5r6"] Nov 23 06:50:56 crc kubenswrapper[4988]: E1123 06:50:56.457589 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2\": container with ID starting with 4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2 not found: ID does not exist" containerID="4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.457648 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2"} err="failed to get container status \"4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2\": rpc error: code = NotFound desc = could not find container \"4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2\": container with ID starting with 4322cdb74bc83eeb147e8d5cdfa08b5bea173fabb5819665fd0d5c2d9699c7d2 not found: ID does not exist" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.461413 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xg5r6"] Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494332 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-template-login\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494404 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494448 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-session\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494478 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494503 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494529 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494738 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-audit-policies\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494810 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494879 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " 
pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.494954 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-template-error\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495006 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-audit-dir\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495056 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495089 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb2kj\" (UniqueName: \"kubernetes.io/projected/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-kube-api-access-mb2kj\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495162 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495278 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495298 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495368 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495416 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-service-ca\") on node \"crc\" 
DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495437 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495489 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495513 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rctbb\" (UniqueName: \"kubernetes.io/projected/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-kube-api-access-rctbb\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495530 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495548 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495563 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495579 4988 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495591 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.495604 4988 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.503086 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216" path="/var/lib/kubelet/pods/c3ff29d2-357b-4ac7-9b6c-cfcd50ba3216/volumes" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597093 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597170 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-template-login\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597242 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597298 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-session\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597353 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597405 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597458 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597541 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-audit-policies\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597614 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597657 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597702 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-template-error\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597737 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-audit-dir\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597779 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.597821 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb2kj\" (UniqueName: \"kubernetes.io/projected/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-kube-api-access-mb2kj\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.598146 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.598596 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-audit-policies\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.598876 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-audit-dir\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.599038 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: 
\"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.599656 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-service-ca\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.603481 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.603511 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.603633 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-router-certs\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.604626 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.612954 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-template-error\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.613024 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-user-template-login\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.614940 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " 
pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.615284 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-v4-0-config-system-session\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.622589 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb2kj\" (UniqueName: \"kubernetes.io/projected/c6ad41c6-56bc-408f-abfc-d7e5500bc9f3-kube-api-access-mb2kj\") pod \"oauth-openshift-8445cf6b-jvv6t\" (UID: \"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3\") " pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.635685 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:56 crc kubenswrapper[4988]: I1123 06:50:56.877734 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8445cf6b-jvv6t"] Nov 23 06:50:57 crc kubenswrapper[4988]: I1123 06:50:57.428560 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" event={"ID":"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3","Type":"ContainerStarted","Data":"e4908d5c01022859f54d21fefc62ce65903f94f691f3f87412274273550cd742"} Nov 23 06:50:57 crc kubenswrapper[4988]: I1123 06:50:57.428605 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" event={"ID":"c6ad41c6-56bc-408f-abfc-d7e5500bc9f3","Type":"ContainerStarted","Data":"cfda09cc1cd439dd7ffe1f9dfa663a08a01d1f276b1351dc3f4267a127f92594"} Nov 23 06:50:57 crc kubenswrapper[4988]: I1123 06:50:57.428736 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:50:57 crc kubenswrapper[4988]: I1123 06:50:57.456416 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" podStartSLOduration=27.456399254 podStartE2EDuration="27.456399254s" podCreationTimestamp="2025-11-23 06:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:50:57.45260407 +0000 UTC m=+309.761116893" watchObservedRunningTime="2025-11-23 06:50:57.456399254 +0000 UTC m=+309.764912017" Nov 23 06:50:57 crc kubenswrapper[4988]: I1123 06:50:57.782107 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8445cf6b-jvv6t" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.629919 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5gwgn"] Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.630944 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5gwgn" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerName="registry-server" containerID="cri-o://22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f" gracePeriod=30 Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.645051 4988 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gcsbm"] Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.646677 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gcsbm" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerName="registry-server" containerID="cri-o://2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9" gracePeriod=30 Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.667886 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jxdnl"] Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.668290 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" podUID="f85d2cce-f57e-4242-8122-5ca62637c30d" containerName="marketplace-operator" containerID="cri-o://d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9" gracePeriod=30 Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.680478 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5z4ss"] Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.680896 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5z4ss" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerName="registry-server" containerID="cri-o://7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a" gracePeriod=30 Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.687054 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xw2hq"] Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.687421 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xw2hq" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" containerName="registry-server" containerID="cri-o://38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff" gracePeriod=30 Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.689871 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g4fsw"] Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.691000 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.701522 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g4fsw"] Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.825948 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcf0e02c-4654-4fe4-aedb-3817fd1d4221-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g4fsw\" (UID: \"dcf0e02c-4654-4fe4-aedb-3817fd1d4221\") " pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.826008 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhpqg\" (UniqueName: \"kubernetes.io/projected/dcf0e02c-4654-4fe4-aedb-3817fd1d4221-kube-api-access-xhpqg\") pod \"marketplace-operator-79b997595-g4fsw\" (UID: \"dcf0e02c-4654-4fe4-aedb-3817fd1d4221\") " pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.826055 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dcf0e02c-4654-4fe4-aedb-3817fd1d4221-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g4fsw\" (UID: \"dcf0e02c-4654-4fe4-aedb-3817fd1d4221\") " pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.927318 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dcf0e02c-4654-4fe4-aedb-3817fd1d4221-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g4fsw\" (UID: \"dcf0e02c-4654-4fe4-aedb-3817fd1d4221\") " pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.927428 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcf0e02c-4654-4fe4-aedb-3817fd1d4221-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g4fsw\" (UID: \"dcf0e02c-4654-4fe4-aedb-3817fd1d4221\") " pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.927455 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhpqg\" (UniqueName: \"kubernetes.io/projected/dcf0e02c-4654-4fe4-aedb-3817fd1d4221-kube-api-access-xhpqg\") pod \"marketplace-operator-79b997595-g4fsw\" (UID: \"dcf0e02c-4654-4fe4-aedb-3817fd1d4221\") " pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.930009 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dcf0e02c-4654-4fe4-aedb-3817fd1d4221-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g4fsw\" (UID: \"dcf0e02c-4654-4fe4-aedb-3817fd1d4221\") " pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.937997 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/dcf0e02c-4654-4fe4-aedb-3817fd1d4221-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g4fsw\" (UID: \"dcf0e02c-4654-4fe4-aedb-3817fd1d4221\") " pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:05 crc kubenswrapper[4988]: I1123 06:51:05.945851 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhpqg\" (UniqueName: \"kubernetes.io/projected/dcf0e02c-4654-4fe4-aedb-3817fd1d4221-kube-api-access-xhpqg\") pod \"marketplace-operator-79b997595-g4fsw\" (UID: \"dcf0e02c-4654-4fe4-aedb-3817fd1d4221\") " pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.014341 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.149634 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.156035 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5gwgn" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.160494 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.172063 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5z4ss" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.186666 4988 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.186666 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xw2hq"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236380 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgg5f\" (UniqueName: \"kubernetes.io/projected/20a119bf-0e89-4c4b-8502-bf5d7759a95d-kube-api-access-hgg5f\") pod \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") "
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236462 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-catalog-content\") pod \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") "
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236496 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-utilities\") pod \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") "
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236544 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-catalog-content\") pod \"9569d22d-764e-44fd-a6ff-6266c766b304\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") "
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236585 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-catalog-content\") pod \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") "
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236614 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb5x5\" (UniqueName: \"kubernetes.io/projected/9569d22d-764e-44fd-a6ff-6266c766b304-kube-api-access-nb5x5\") pod \"9569d22d-764e-44fd-a6ff-6266c766b304\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") "
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236646 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-operator-metrics\") pod \"f85d2cce-f57e-4242-8122-5ca62637c30d\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") "
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236674 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfq9g\" (UniqueName: \"kubernetes.io/projected/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-kube-api-access-mfq9g\") pod \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") "
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236706 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfdmg\" (UniqueName: \"kubernetes.io/projected/fc4d3d52-3334-454d-8ea9-3acc065a17b3-kube-api-access-dfdmg\") pod \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\" (UID: \"fc4d3d52-3334-454d-8ea9-3acc065a17b3\") "
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236736 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName:
\"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-utilities\") pod \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\" (UID: \"1d6d0eff-3d7a-4913-996a-d7db0261b1d7\") " Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236767 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-utilities\") pod \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236793 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m64dn\" (UniqueName: \"kubernetes.io/projected/f85d2cce-f57e-4242-8122-5ca62637c30d-kube-api-access-m64dn\") pod \"f85d2cce-f57e-4242-8122-5ca62637c30d\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236838 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-catalog-content\") pod \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\" (UID: \"20a119bf-0e89-4c4b-8502-bf5d7759a95d\") " Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236872 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-trusted-ca\") pod \"f85d2cce-f57e-4242-8122-5ca62637c30d\" (UID: \"f85d2cce-f57e-4242-8122-5ca62637c30d\") " Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.236898 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-utilities\") pod \"9569d22d-764e-44fd-a6ff-6266c766b304\" (UID: \"9569d22d-764e-44fd-a6ff-6266c766b304\") " Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.238635 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-utilities" (OuterVolumeSpecName: "utilities") pod "9569d22d-764e-44fd-a6ff-6266c766b304" (UID: "9569d22d-764e-44fd-a6ff-6266c766b304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.242284 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-kube-api-access-mfq9g" (OuterVolumeSpecName: "kube-api-access-mfq9g") pod "1d6d0eff-3d7a-4913-996a-d7db0261b1d7" (UID: "1d6d0eff-3d7a-4913-996a-d7db0261b1d7"). InnerVolumeSpecName "kube-api-access-mfq9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.243360 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85d2cce-f57e-4242-8122-5ca62637c30d-kube-api-access-m64dn" (OuterVolumeSpecName: "kube-api-access-m64dn") pod "f85d2cce-f57e-4242-8122-5ca62637c30d" (UID: "f85d2cce-f57e-4242-8122-5ca62637c30d"). InnerVolumeSpecName "kube-api-access-m64dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.244380 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-utilities" (OuterVolumeSpecName: "utilities") pod "20a119bf-0e89-4c4b-8502-bf5d7759a95d" (UID: "20a119bf-0e89-4c4b-8502-bf5d7759a95d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.248013 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-utilities" (OuterVolumeSpecName: "utilities") pod "1d6d0eff-3d7a-4913-996a-d7db0261b1d7" (UID: "1d6d0eff-3d7a-4913-996a-d7db0261b1d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.254161 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-utilities" (OuterVolumeSpecName: "utilities") pod "fc4d3d52-3334-454d-8ea9-3acc065a17b3" (UID: "fc4d3d52-3334-454d-8ea9-3acc065a17b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.254298 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f85d2cce-f57e-4242-8122-5ca62637c30d" (UID: "f85d2cce-f57e-4242-8122-5ca62637c30d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.269264 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d6d0eff-3d7a-4913-996a-d7db0261b1d7" (UID: "1d6d0eff-3d7a-4913-996a-d7db0261b1d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.274214 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g4fsw"] Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.295490 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f85d2cce-f57e-4242-8122-5ca62637c30d" (UID: "f85d2cce-f57e-4242-8122-5ca62637c30d"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.296462 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20a119bf-0e89-4c4b-8502-bf5d7759a95d-kube-api-access-hgg5f" (OuterVolumeSpecName: "kube-api-access-hgg5f") pod "20a119bf-0e89-4c4b-8502-bf5d7759a95d" (UID: "20a119bf-0e89-4c4b-8502-bf5d7759a95d"). InnerVolumeSpecName "kube-api-access-hgg5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.296600 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9569d22d-764e-44fd-a6ff-6266c766b304-kube-api-access-nb5x5" (OuterVolumeSpecName: "kube-api-access-nb5x5") pod "9569d22d-764e-44fd-a6ff-6266c766b304" (UID: "9569d22d-764e-44fd-a6ff-6266c766b304"). InnerVolumeSpecName "kube-api-access-nb5x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.297721 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20a119bf-0e89-4c4b-8502-bf5d7759a95d" (UID: "20a119bf-0e89-4c4b-8502-bf5d7759a95d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.298940 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc4d3d52-3334-454d-8ea9-3acc065a17b3-kube-api-access-dfdmg" (OuterVolumeSpecName: "kube-api-access-dfdmg") pod "fc4d3d52-3334-454d-8ea9-3acc065a17b3" (UID: "fc4d3d52-3334-454d-8ea9-3acc065a17b3"). InnerVolumeSpecName "kube-api-access-dfdmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338089 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338132 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338145 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m64dn\" (UniqueName: \"kubernetes.io/projected/f85d2cce-f57e-4242-8122-5ca62637c30d-kube-api-access-m64dn\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338158 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20a119bf-0e89-4c4b-8502-bf5d7759a95d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338170 4988 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338181 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338205 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgg5f\" (UniqueName: \"kubernetes.io/projected/20a119bf-0e89-4c4b-8502-bf5d7759a95d-kube-api-access-hgg5f\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338219 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-utilities\") on node 
\"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338232 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338244 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb5x5\" (UniqueName: \"kubernetes.io/projected/9569d22d-764e-44fd-a6ff-6266c766b304-kube-api-access-nb5x5\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338256 4988 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f85d2cce-f57e-4242-8122-5ca62637c30d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338269 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfq9g\" (UniqueName: \"kubernetes.io/projected/1d6d0eff-3d7a-4913-996a-d7db0261b1d7-kube-api-access-mfq9g\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338282 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfdmg\" (UniqueName: \"kubernetes.io/projected/fc4d3d52-3334-454d-8ea9-3acc065a17b3-kube-api-access-dfdmg\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.338294 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc4d3d52-3334-454d-8ea9-3acc065a17b3" (UID: "fc4d3d52-3334-454d-8ea9-3acc065a17b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.356551 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9569d22d-764e-44fd-a6ff-6266c766b304" (UID: "9569d22d-764e-44fd-a6ff-6266c766b304"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.439933 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9569d22d-764e-44fd-a6ff-6266c766b304-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.440217 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc4d3d52-3334-454d-8ea9-3acc065a17b3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.486759 4988 generic.go:334] "Generic (PLEG): container finished" podID="f85d2cce-f57e-4242-8122-5ca62637c30d" containerID="d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9" exitCode=0 Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.486891 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" event={"ID":"f85d2cce-f57e-4242-8122-5ca62637c30d","Type":"ContainerDied","Data":"d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9"} Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.486964 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" event={"ID":"f85d2cce-f57e-4242-8122-5ca62637c30d","Type":"ContainerDied","Data":"d9bfcaf933b294d5df991952bb9dd21392dafe9b8d61efa2a67ee483548da186"} Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.487016 4988 scope.go:117] "RemoveContainer" containerID="d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.487377 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jxdnl" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.492662 4988 generic.go:334] "Generic (PLEG): container finished" podID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerID="2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9" exitCode=0 Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.492753 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcsbm" event={"ID":"fc4d3d52-3334-454d-8ea9-3acc065a17b3","Type":"ContainerDied","Data":"2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9"} Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.492759 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcsbm" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.492793 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcsbm" event={"ID":"fc4d3d52-3334-454d-8ea9-3acc065a17b3","Type":"ContainerDied","Data":"09c0674125028dfbc0558eba764d4e1b286fbdc8fea0b36b87b73debf4499625"} Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.497009 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerID="7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a" exitCode=0 Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.497174 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5z4ss"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.504066 4988 generic.go:334] "Generic (PLEG): container finished" podID="9569d22d-764e-44fd-a6ff-6266c766b304" containerID="38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff" exitCode=0
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.504179 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xw2hq"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.508921 4988 generic.go:334] "Generic (PLEG): container finished" podID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerID="22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f" exitCode=0
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.509146 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5gwgn"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.510451 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5z4ss" event={"ID":"1d6d0eff-3d7a-4913-996a-d7db0261b1d7","Type":"ContainerDied","Data":"7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a"}
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.510560 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5z4ss" event={"ID":"1d6d0eff-3d7a-4913-996a-d7db0261b1d7","Type":"ContainerDied","Data":"66e7417146ffcb630f5be67573e14834b476ae5c8223e5f167bd07c32a7f9624"}
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.510583 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw2hq" event={"ID":"9569d22d-764e-44fd-a6ff-6266c766b304","Type":"ContainerDied","Data":"38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff"}
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.510601 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw2hq" event={"ID":"9569d22d-764e-44fd-a6ff-6266c766b304","Type":"ContainerDied","Data":"43b4901dd65b24246f39df699f92e7a21529d876107ab8f4bcf3ddfd4584e090"}
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.510618 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gwgn" event={"ID":"20a119bf-0e89-4c4b-8502-bf5d7759a95d","Type":"ContainerDied","Data":"22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f"}
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.510636 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gwgn" event={"ID":"20a119bf-0e89-4c4b-8502-bf5d7759a95d","Type":"ContainerDied","Data":"4eea5382c1165642ceee879c657f864f2fe0fcdb4721fb6d7408a2df1a4f5b54"}
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.512348 4988 scope.go:117] "RemoveContainer" containerID="d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9"
Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.513341 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9\": container with ID starting with d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9 not found: ID does not exist" containerID="d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.513404 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9"} err="failed to get container status \"d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9\": rpc error: code = NotFound desc = could not find container \"d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9\": container with ID starting with d41b4d415040d9ef0f14e8bec91de4d09c3cc1e1ec689e655cc831498580b5e9 not found: ID does not exist"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.513439 4988 scope.go:117] "RemoveContainer" containerID="2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.515133 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" event={"ID":"dcf0e02c-4654-4fe4-aedb-3817fd1d4221","Type":"ContainerStarted","Data":"7edddbd5463de6ed378e1e406f510ee5af423a40e2c9db77c3e38728df762dbf"}
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.515186 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" event={"ID":"dcf0e02c-4654-4fe4-aedb-3817fd1d4221","Type":"ContainerStarted","Data":"f570985c25078fb58bd1c5d6f7b16d8314d54c71186b87509945a2b2d3ec6bc1"}
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.515637 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.517112 4988 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g4fsw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body=
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.517168 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" podUID="dcf0e02c-4654-4fe4-aedb-3817fd1d4221" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.537826 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" podStartSLOduration=1.537796886 podStartE2EDuration="1.537796886s" podCreationTimestamp="2025-11-23 06:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:51:06.536093744 +0000 UTC m=+318.844606507" watchObservedRunningTime="2025-11-23 06:51:06.537796886 +0000 UTC m=+318.846309649"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.553097 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jxdnl"]
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.557937 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jxdnl"]
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.559370 4988 scope.go:117] "RemoveContainer" containerID="762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d"
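
The marketplace-operator-79b997595-g4fsw entries above are a routine cold-start race: the container starts at 06:51:06.515, the first readiness probe fires before the process has bound 10.217.0.55:8080 and gets "connection refused", and the same probe succeeds about a second later (06:51:07.532, further down). A minimal sketch of what such an HTTP readiness check amounts to, assuming the usual kubelet HTTP-probe semantics (a GET where any 2xx or 3xx status counts as success); the URL comes from the log, while the retry loop and timeout values are illustrative only, not the kubelet's actual prober configuration:

    // readiness_check.go - illustrative sketch of an HTTP readiness check
    // like the one probed against http://10.217.0.55:8080/healthz above.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func ready(url string) bool {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            // A freshly started container typically refuses connections
            // until its server binds the port, as at 06:51:06.517.
            return false
        }
        defer resp.Body.Close()
        return resp.StatusCode >= 200 && resp.StatusCode < 400
    }

    func main() {
        for i := 0; i < 5; i++ {
            if ready("http://10.217.0.55:8080/healthz") {
                fmt.Println("ready")
                return
            }
            time.Sleep(time.Second) // probe period; the pod turns ready one tick later
        }
        fmt.Println("not ready")
    }
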
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.583105 4988 scope.go:117] "RemoveContainer" containerID="77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.593905 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gcsbm"]
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.611637 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gcsbm"]
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.620061 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5gwgn"]
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.624030 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5gwgn"]
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.628372 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5z4ss"]
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.633401 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5z4ss"]
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.638728 4988 scope.go:117] "RemoveContainer" containerID="2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9"
Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.640884 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9\": container with ID starting with 2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9 not found: ID does not exist" containerID="2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.640924 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9"} err="failed to get container status \"2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9\": rpc error: code = NotFound desc = could not find container \"2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9\": container with ID starting with 2b6081ee617fbafe30abbcb7f6357620912d06312730c70c5e2e98fd9b0a0fe9 not found: ID does not exist"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.640954 4988 scope.go:117] "RemoveContainer" containerID="762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.641015 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xw2hq"]
Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.641591 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d\": container with ID starting with 762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d not found: ID does not exist" containerID="762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.641648 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d"} err="failed to get container status \"762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d\": rpc error: code = NotFound desc = could not find container \"762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d\": container with ID starting with 762327a9eac09ef2c07fba0556ee9e1a7f00fdd92c0fdb0ade05cc61f14f6a6d not found: ID does not exist"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.641665 4988 scope.go:117] "RemoveContainer" containerID="77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54"
Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.642143 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54\": container with ID starting with 77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54 not found: ID does not exist" containerID="77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.642166 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54"} err="failed to get container status \"77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54\": rpc error: code = NotFound desc = could not find container \"77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54\": container with ID starting with 77356994bdb9210645945d9d6e1bd28b2e27a366dccfdf03f7b10a600fc84f54 not found: ID does not exist"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.642180 4988 scope.go:117] "RemoveContainer" containerID="7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.645773 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xw2hq"]
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.658018 4988 scope.go:117] "RemoveContainer" containerID="7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.671403 4988 scope.go:117] "RemoveContainer" containerID="ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.688516 4988 scope.go:117] "RemoveContainer" containerID="7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a"
Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.689152 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a\": container with ID starting with 7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a not found: ID does not exist" containerID="7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.689248 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a"} err="failed to get container status \"7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a\": rpc error: code = NotFound desc = could not find container \"7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a\": container with ID starting with 7b8dbe61cfc404411d50d6d93b680073a458a6b984fbcaaf800e30d5fb48354a not found: ID does not exist"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.689304 4988 scope.go:117] "RemoveContainer" containerID="7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007"
Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.689626 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007\": container with ID starting with 7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007 not found: ID does not exist" containerID="7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.689658 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007"} err="failed to get container status \"7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007\": rpc error: code = NotFound desc = could not find container \"7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007\": container with ID starting with 7724aafb2f038f1e8cedd1ba308e111ee8ca88fca0f67467ee117aa337690007 not found: ID does not exist"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.689676 4988 scope.go:117] "RemoveContainer" containerID="ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e"
Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.689911 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e\": container with ID starting with ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e not found: ID does not exist" containerID="ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.689942 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e"} err="failed to get container status \"ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e\": rpc error: code = NotFound desc = could not find container \"ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e\": container with ID starting with ac27cbb1d97025650120af0687d6f757397eb0c33a1aea38ffc990383b15bb9e not found: ID does not exist"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.689964 4988 scope.go:117] "RemoveContainer" containerID="38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.719649 4988 scope.go:117] "RemoveContainer" containerID="23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.737298 4988 scope.go:117] "RemoveContainer" containerID="36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d"
Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.750086 4988 scope.go:117] "RemoveContainer" containerID="38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff"
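
The recurring pattern in this stretch, "RemoveContainer" followed by a NotFound error from the runtime and a "DeleteContainer returned error" record, is benign: the containers of the deleted catalog pods are already gone from CRI-O by the time their status is re-fetched. A sketch, assuming a gRPC CRI-style runtime client, of how cleanup code can treat codes.NotFound as an idempotent success; runtimeService and removeContainer are hypothetical stand-ins, only the status/codes handling mirrors what the log shows:

    // cleanup_sketch.go - deleting a container that is already gone can be
    // treated as success rather than a failure.
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // runtimeService is a hypothetical stand-in for a CRI runtime client.
    type runtimeService interface {
        RemoveContainer(ctx context.Context, id string) error
    }

    func removeContainer(ctx context.Context, rt runtimeService, id string) error {
        err := rt.RemoveContainer(ctx, id)
        if status.Code(err) == codes.NotFound {
            // "could not find container ... ID does not exist": the
            // container was already removed, so the delete is complete.
            fmt.Printf("container %s already gone, nothing to do\n", id)
            return nil
        }
        return err
    }
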
containerID="38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.750744 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff"} err="failed to get container status \"38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff\": rpc error: code = NotFound desc = could not find container \"38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff\": container with ID starting with 38c6eb04bf3327edf4be74b9006beb41272f2e9066e23365e68ef5f45ddeeaff not found: ID does not exist" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.750785 4988 scope.go:117] "RemoveContainer" containerID="23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8" Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.751106 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8\": container with ID starting with 23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8 not found: ID does not exist" containerID="23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.751162 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8"} err="failed to get container status \"23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8\": rpc error: code = NotFound desc = could not find container \"23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8\": container with ID starting with 23214c88750a71ed777c4940b27ca6a781c2da1c5ccf86f3f534452a727c55a8 not found: ID does not exist" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.751184 4988 scope.go:117] "RemoveContainer" containerID="36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d" Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.751541 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d\": container with ID starting with 36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d not found: ID does not exist" containerID="36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.751563 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d"} err="failed to get container status \"36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d\": rpc error: code = NotFound desc = could not find container \"36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d\": container with ID starting with 36bc99ef8d00daaea65858144db9375fa5838f6558598981eeb62f8186612c0d not found: ID does not exist" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.751577 4988 scope.go:117] "RemoveContainer" containerID="22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.765160 4988 scope.go:117] "RemoveContainer" containerID="0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 
06:51:06.779555 4988 scope.go:117] "RemoveContainer" containerID="44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.793467 4988 scope.go:117] "RemoveContainer" containerID="22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f" Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.793946 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f\": container with ID starting with 22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f not found: ID does not exist" containerID="22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.794011 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f"} err="failed to get container status \"22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f\": rpc error: code = NotFound desc = could not find container \"22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f\": container with ID starting with 22d2945dd114382b17c2a9ecf90082bc4a9d0cd68d34ff1486623eff4162838f not found: ID does not exist" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.794058 4988 scope.go:117] "RemoveContainer" containerID="0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d" Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.794472 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d\": container with ID starting with 0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d not found: ID does not exist" containerID="0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.794520 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d"} err="failed to get container status \"0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d\": rpc error: code = NotFound desc = could not find container \"0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d\": container with ID starting with 0399448b3ef4e38c8b4e0656aec2b4ad1291ff7bad99266ca5287aaeac354b3d not found: ID does not exist" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.794559 4988 scope.go:117] "RemoveContainer" containerID="44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2" Nov 23 06:51:06 crc kubenswrapper[4988]: E1123 06:51:06.795502 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2\": container with ID starting with 44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2 not found: ID does not exist" containerID="44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2" Nov 23 06:51:06 crc kubenswrapper[4988]: I1123 06:51:06.795538 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2"} err="failed to get container status 
\"44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2\": rpc error: code = NotFound desc = could not find container \"44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2\": container with ID starting with 44c1e69bf478ab64f3468ad926439ce7a77c5b698b3dc9208506448d6e98cab2 not found: ID does not exist" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.532643 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g4fsw" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.861628 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gpdnr"] Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.864286 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerName="extract-content" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.864476 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerName="extract-content" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.864601 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerName="extract-content" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.864726 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerName="extract-content" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.864843 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" containerName="extract-utilities" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.864966 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" containerName="extract-utilities" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.865087 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.865220 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.865384 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.865529 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.865738 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerName="extract-utilities" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.865887 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerName="extract-utilities" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.866021 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerName="extract-utilities" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.866135 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerName="extract-utilities" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.866293 
4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerName="extract-utilities" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.867006 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerName="extract-utilities" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.867185 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.867343 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.867463 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerName="extract-content" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.867599 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerName="extract-content" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.867733 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85d2cce-f57e-4242-8122-5ca62637c30d" containerName="marketplace-operator" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.867907 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85d2cce-f57e-4242-8122-5ca62637c30d" containerName="marketplace-operator" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.868043 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" containerName="extract-content" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.868169 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" containerName="extract-content" Nov 23 06:51:07 crc kubenswrapper[4988]: E1123 06:51:07.868324 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.868438 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.868768 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85d2cce-f57e-4242-8122-5ca62637c30d" containerName="marketplace-operator" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.868910 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.869046 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.869170 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.869371 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" containerName="registry-server" Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.871823 4988 util.go:30] "No sandbox for pod can be found. 
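
The cpu_manager and memory_manager entries just above show per-container resource bookkeeping being purged once the old catalog pods are deleted: the state is keyed by podUID plus containerName, and RemoveStaleState drops every entry whose pod is no longer active. A schematic rendering of that pattern, assuming a plain map for the state; this illustrates the bookkeeping, not the kubelet's actual data structures:

    // stale_state_sketch.go - schematic version of the RemoveStaleState
    // pattern seen above.
    package main

    import "fmt"

    type key struct {
        podUID        string
        containerName string
    }

    func removeStaleState(state map[key]string, activePods map[string]bool) {
        for k := range state {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %q of pod %s\n",
                    k.containerName, k.podUID)
                delete(state, k) // deleting during range is safe in Go
            }
        }
    }

    func main() {
        // Sample entry using a pod UID from the log; the CPUSet value is
        // illustrative.
        state := map[key]string{
            {"1d6d0eff-3d7a-4913-996a-d7db0261b1d7", "registry-server"}: "cpus 0-3",
        }
        removeStaleState(state, map[string]bool{}) // no matching active pods left
    }
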
Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.871823 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.874958 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 23 06:51:07 crc kubenswrapper[4988]: I1123 06:51:07.875710 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gpdnr"]
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.060855 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a120914-49bb-4c6c-9b35-7e89e7749110-catalog-content\") pod \"redhat-marketplace-gpdnr\" (UID: \"6a120914-49bb-4c6c-9b35-7e89e7749110\") " pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.061010 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4x88\" (UniqueName: \"kubernetes.io/projected/6a120914-49bb-4c6c-9b35-7e89e7749110-kube-api-access-s4x88\") pod \"redhat-marketplace-gpdnr\" (UID: \"6a120914-49bb-4c6c-9b35-7e89e7749110\") " pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.061092 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a120914-49bb-4c6c-9b35-7e89e7749110-utilities\") pod \"redhat-marketplace-gpdnr\" (UID: \"6a120914-49bb-4c6c-9b35-7e89e7749110\") " pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.063222 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9rjm"]
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.064531 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.067876 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.074625 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9rjm"]
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.162867 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a120914-49bb-4c6c-9b35-7e89e7749110-catalog-content\") pod \"redhat-marketplace-gpdnr\" (UID: \"6a120914-49bb-4c6c-9b35-7e89e7749110\") " pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.162926 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4x88\" (UniqueName: \"kubernetes.io/projected/6a120914-49bb-4c6c-9b35-7e89e7749110-kube-api-access-s4x88\") pod \"redhat-marketplace-gpdnr\" (UID: \"6a120914-49bb-4c6c-9b35-7e89e7749110\") " pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.162955 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a120914-49bb-4c6c-9b35-7e89e7749110-utilities\") pod \"redhat-marketplace-gpdnr\" (UID: \"6a120914-49bb-4c6c-9b35-7e89e7749110\") " pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.163505 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a120914-49bb-4c6c-9b35-7e89e7749110-catalog-content\") pod \"redhat-marketplace-gpdnr\" (UID: \"6a120914-49bb-4c6c-9b35-7e89e7749110\") " pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.163540 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a120914-49bb-4c6c-9b35-7e89e7749110-utilities\") pod \"redhat-marketplace-gpdnr\" (UID: \"6a120914-49bb-4c6c-9b35-7e89e7749110\") " pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.193017 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4x88\" (UniqueName: \"kubernetes.io/projected/6a120914-49bb-4c6c-9b35-7e89e7749110-kube-api-access-s4x88\") pod \"redhat-marketplace-gpdnr\" (UID: \"6a120914-49bb-4c6c-9b35-7e89e7749110\") " pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.198482 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gpdnr"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.264379 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2aaec8-1a4e-4655-ba7f-ce2d2065920d-utilities\") pod \"redhat-operators-k9rjm\" (UID: \"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d\") " pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.264792 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2aaec8-1a4e-4655-ba7f-ce2d2065920d-catalog-content\") pod \"redhat-operators-k9rjm\" (UID: \"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d\") " pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.264823 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj6tn\" (UniqueName: \"kubernetes.io/projected/8e2aaec8-1a4e-4655-ba7f-ce2d2065920d-kube-api-access-hj6tn\") pod \"redhat-operators-k9rjm\" (UID: \"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d\") " pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.366494 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2aaec8-1a4e-4655-ba7f-ce2d2065920d-catalog-content\") pod \"redhat-operators-k9rjm\" (UID: \"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d\") " pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.366558 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj6tn\" (UniqueName: \"kubernetes.io/projected/8e2aaec8-1a4e-4655-ba7f-ce2d2065920d-kube-api-access-hj6tn\") pod \"redhat-operators-k9rjm\" (UID: \"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d\") " pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.366658 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2aaec8-1a4e-4655-ba7f-ce2d2065920d-utilities\") pod \"redhat-operators-k9rjm\" (UID: \"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d\") " pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.367212 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2aaec8-1a4e-4655-ba7f-ce2d2065920d-utilities\") pod \"redhat-operators-k9rjm\" (UID: \"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d\") " pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.367211 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2aaec8-1a4e-4655-ba7f-ce2d2065920d-catalog-content\") pod \"redhat-operators-k9rjm\" (UID: \"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d\") " pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.402323 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj6tn\" (UniqueName: \"kubernetes.io/projected/8e2aaec8-1a4e-4655-ba7f-ce2d2065920d-kube-api-access-hj6tn\") pod \"redhat-operators-k9rjm\" (UID: \"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d\") " pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.502607 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d6d0eff-3d7a-4913-996a-d7db0261b1d7" path="/var/lib/kubelet/pods/1d6d0eff-3d7a-4913-996a-d7db0261b1d7/volumes"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.503414 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a119bf-0e89-4c4b-8502-bf5d7759a95d" path="/var/lib/kubelet/pods/20a119bf-0e89-4c4b-8502-bf5d7759a95d/volumes"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.504154 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9569d22d-764e-44fd-a6ff-6266c766b304" path="/var/lib/kubelet/pods/9569d22d-764e-44fd-a6ff-6266c766b304/volumes"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.505536 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85d2cce-f57e-4242-8122-5ca62637c30d" path="/var/lib/kubelet/pods/f85d2cce-f57e-4242-8122-5ca62637c30d/volumes"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.506113 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc4d3d52-3334-454d-8ea9-3acc065a17b3" path="/var/lib/kubelet/pods/fc4d3d52-3334-454d-8ea9-3acc065a17b3/volumes"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.615873 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gpdnr"]
Nov 23 06:51:08 crc kubenswrapper[4988]: W1123 06:51:08.625609 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a120914_49bb_4c6c_9b35_7e89e7749110.slice/crio-3d386a7cdfb4346958be7f898b1bd37c00706529ce3755997122983fb77b8466 WatchSource:0}: Error finding container 3d386a7cdfb4346958be7f898b1bd37c00706529ce3755997122983fb77b8466: Status 404 returned error can't find the container with id 3d386a7cdfb4346958be7f898b1bd37c00706529ce3755997122983fb77b8466
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.696630 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9rjm"
Nov 23 06:51:08 crc kubenswrapper[4988]: I1123 06:51:08.926216 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9rjm"]
Nov 23 06:51:08 crc kubenswrapper[4988]: W1123 06:51:08.931933 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e2aaec8_1a4e_4655_ba7f_ce2d2065920d.slice/crio-fe9b7b571df345c6e8acc6c32f1b1694c12d8d86544fa5ebe3bba35c7c49053b WatchSource:0}: Error finding container fe9b7b571df345c6e8acc6c32f1b1694c12d8d86544fa5ebe3bba35c7c49053b: Status 404 returned error can't find the container with id fe9b7b571df345c6e8acc6c32f1b1694c12d8d86544fa5ebe3bba35c7c49053b
Nov 23 06:51:09 crc kubenswrapper[4988]: I1123 06:51:09.544140 4988 generic.go:334] "Generic (PLEG): container finished" podID="8e2aaec8-1a4e-4655-ba7f-ce2d2065920d" containerID="6acbb3b420e26ed0a6ec2df59bc1e3a4ca8b2bd29246d8a64272cc38a9898ada" exitCode=0
Nov 23 06:51:09 crc kubenswrapper[4988]: I1123 06:51:09.544293 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9rjm" event={"ID":"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d","Type":"ContainerDied","Data":"6acbb3b420e26ed0a6ec2df59bc1e3a4ca8b2bd29246d8a64272cc38a9898ada"}
Nov 23 06:51:09 crc kubenswrapper[4988]: I1123 06:51:09.544690 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9rjm" event={"ID":"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d","Type":"ContainerStarted","Data":"fe9b7b571df345c6e8acc6c32f1b1694c12d8d86544fa5ebe3bba35c7c49053b"}
Nov 23 06:51:09 crc kubenswrapper[4988]: I1123 06:51:09.549497 4988 generic.go:334] "Generic (PLEG): container finished" podID="6a120914-49bb-4c6c-9b35-7e89e7749110" containerID="774000b7ae81df6042f2a54fcffe06d74ccb4d317170194a39efbe321fc36ed7" exitCode=0
Nov 23 06:51:09 crc kubenswrapper[4988]: I1123 06:51:09.549531 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpdnr" event={"ID":"6a120914-49bb-4c6c-9b35-7e89e7749110","Type":"ContainerDied","Data":"774000b7ae81df6042f2a54fcffe06d74ccb4d317170194a39efbe321fc36ed7"}
Nov 23 06:51:09 crc kubenswrapper[4988]: I1123 06:51:09.549556 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpdnr" event={"ID":"6a120914-49bb-4c6c-9b35-7e89e7749110","Type":"ContainerStarted","Data":"3d386a7cdfb4346958be7f898b1bd37c00706529ce3755997122983fb77b8466"}
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.259105 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7jb5m"]
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.260134 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.263374 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.270541 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jb5m"]
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.291295 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-utilities\") pod \"community-operators-7jb5m\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.292011 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2w9z\" (UniqueName: \"kubernetes.io/projected/e8d54713-f5b1-4f71-a8a1-8b604068e791-kube-api-access-d2w9z\") pod \"community-operators-7jb5m\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.292127 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-catalog-content\") pod \"community-operators-7jb5m\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.393622 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-catalog-content\") pod \"community-operators-7jb5m\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.393727 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-utilities\") pod \"community-operators-7jb5m\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.393782 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2w9z\" (UniqueName: \"kubernetes.io/projected/e8d54713-f5b1-4f71-a8a1-8b604068e791-kube-api-access-d2w9z\") pod \"community-operators-7jb5m\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.394676 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-utilities\") pod \"community-operators-7jb5m\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.396422 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-catalog-content\") pod \"community-operators-7jb5m\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.420883 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2w9z\" (UniqueName: \"kubernetes.io/projected/e8d54713-f5b1-4f71-a8a1-8b604068e791-kube-api-access-d2w9z\") pod \"community-operators-7jb5m\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.456211 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-75tvw"]
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.457451 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.460281 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.472336 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-75tvw"]
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.494384 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ee2284-9e6c-4049-bf43-9473c176ca62-catalog-content\") pod \"certified-operators-75tvw\" (UID: \"48ee2284-9e6c-4049-bf43-9473c176ca62\") " pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.494431 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ee2284-9e6c-4049-bf43-9473c176ca62-utilities\") pod \"certified-operators-75tvw\" (UID: \"48ee2284-9e6c-4049-bf43-9473c176ca62\") " pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.494457 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6mzt\" (UniqueName: \"kubernetes.io/projected/48ee2284-9e6c-4049-bf43-9473c176ca62-kube-api-access-b6mzt\") pod \"certified-operators-75tvw\" (UID: \"48ee2284-9e6c-4049-bf43-9473c176ca62\") " pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.592972 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jb5m"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.595685 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ee2284-9e6c-4049-bf43-9473c176ca62-catalog-content\") pod \"certified-operators-75tvw\" (UID: \"48ee2284-9e6c-4049-bf43-9473c176ca62\") " pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.595748 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ee2284-9e6c-4049-bf43-9473c176ca62-utilities\") pod \"certified-operators-75tvw\" (UID: \"48ee2284-9e6c-4049-bf43-9473c176ca62\") " pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.595800 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6mzt\" (UniqueName: \"kubernetes.io/projected/48ee2284-9e6c-4049-bf43-9473c176ca62-kube-api-access-b6mzt\") pod \"certified-operators-75tvw\" (UID: \"48ee2284-9e6c-4049-bf43-9473c176ca62\") " pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.596257 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ee2284-9e6c-4049-bf43-9473c176ca62-catalog-content\") pod \"certified-operators-75tvw\" (UID: \"48ee2284-9e6c-4049-bf43-9473c176ca62\") " pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.596269 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ee2284-9e6c-4049-bf43-9473c176ca62-utilities\") pod \"certified-operators-75tvw\" (UID: \"48ee2284-9e6c-4049-bf43-9473c176ca62\") " pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.618543 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6mzt\" (UniqueName: \"kubernetes.io/projected/48ee2284-9e6c-4049-bf43-9473c176ca62-kube-api-access-b6mzt\") pod \"certified-operators-75tvw\" (UID: \"48ee2284-9e6c-4049-bf43-9473c176ca62\") " pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.772052 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jb5m"]
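
The SyncLoop (PLEG) lines that bracket this section carry events shaped like {"ID":...,"Type":"ContainerDied","Data":...}, where ID is the pod UID, Type is the lifecycle transition, and Data is a container or sandbox ID; for these catalog pods, each extract step exits 0 (ContainerDied) before the long-lived registry-server container starts. A small Go rendering of that event shape and how a consumer might branch on it, modeled on the JSON in the log rather than imported from the kubelet source:

    // pleg_event_sketch.go - the event payload embedded in the SyncLoop
    // (PLEG) lines, reconstructed from the log's JSON for illustration.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type PodLifecycleEvent struct {
        ID   string `json:"ID"`   // pod UID
        Type string `json:"Type"` // e.g. ContainerStarted, ContainerDied
        Data string `json:"Data"` // container or sandbox ID
    }

    func main() {
        // Payload copied from a community-operators-7jb5m event below.
        raw := `{"ID":"e8d54713-f5b1-4f71-a8a1-8b604068e791","Type":"ContainerDied","Data":"597b114f4e8e69b912b9d346f30e68abbd144582f180126eaf9b784e82c425f1"}`
        var ev PodLifecycleEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        switch ev.Type {
        case "ContainerDied":
            // For the catalog pods, each extract step exits 0 before the
            // registry-server container is started.
            fmt.Printf("pod %s: container %s finished\n", ev.ID, ev.Data)
        case "ContainerStarted":
            fmt.Printf("pod %s: container %s started\n", ev.ID, ev.Data)
        }
    }
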
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.789679 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75tvw"
Nov 23 06:51:10 crc kubenswrapper[4988]: I1123 06:51:10.993718 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-75tvw"]
Nov 23 06:51:11 crc kubenswrapper[4988]: I1123 06:51:11.566792 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb5m" event={"ID":"e8d54713-f5b1-4f71-a8a1-8b604068e791","Type":"ContainerStarted","Data":"fbc55164b9de17f1f951967353678e4fa2e00be2274ef43829bc7add273e6151"}
Nov 23 06:51:11 crc kubenswrapper[4988]: I1123 06:51:11.569168 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75tvw" event={"ID":"48ee2284-9e6c-4049-bf43-9473c176ca62","Type":"ContainerStarted","Data":"fe111d9a63a3d508d9fc880b57baddc7bbf591e5fc066cf42a10571d023e3f2f"}
Nov 23 06:51:12 crc kubenswrapper[4988]: I1123 06:51:12.575921 4988 generic.go:334] "Generic (PLEG): container finished" podID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerID="597b114f4e8e69b912b9d346f30e68abbd144582f180126eaf9b784e82c425f1" exitCode=0
Nov 23 06:51:12 crc kubenswrapper[4988]: I1123 06:51:12.575990 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb5m" event={"ID":"e8d54713-f5b1-4f71-a8a1-8b604068e791","Type":"ContainerDied","Data":"597b114f4e8e69b912b9d346f30e68abbd144582f180126eaf9b784e82c425f1"}
Nov 23 06:51:13 crc kubenswrapper[4988]: I1123 06:51:13.584358 4988 generic.go:334] "Generic (PLEG): container finished" podID="8e2aaec8-1a4e-4655-ba7f-ce2d2065920d" containerID="309110fdd697517b94e14e2dc32eb4053a897f73b5509c791afa92279d73d5bc" exitCode=0
Nov 23 06:51:13 crc kubenswrapper[4988]: I1123 06:51:13.584480 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9rjm" event={"ID":"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d","Type":"ContainerDied","Data":"309110fdd697517b94e14e2dc32eb4053a897f73b5509c791afa92279d73d5bc"}
Nov 23 06:51:13 crc kubenswrapper[4988]: I1123 06:51:13.587263 4988 generic.go:334] "Generic (PLEG): container finished" podID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerID="f4b09f8b08292085eb13e84e0b106be140e691ffa4042f453c1c61a84eee00f2" exitCode=0
Nov 23 06:51:13 crc kubenswrapper[4988]: I1123 06:51:13.587413 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb5m" event={"ID":"e8d54713-f5b1-4f71-a8a1-8b604068e791","Type":"ContainerDied","Data":"f4b09f8b08292085eb13e84e0b106be140e691ffa4042f453c1c61a84eee00f2"}
Nov 23 06:51:13 crc kubenswrapper[4988]: I1123 06:51:13.591769 4988 generic.go:334] "Generic (PLEG): container finished" podID="48ee2284-9e6c-4049-bf43-9473c176ca62" containerID="24dcf40a510eeffce4a2bd3b62e5193a213d841f49da99fd1dee56cb7878dab0" exitCode=0
Nov 23 06:51:13 crc kubenswrapper[4988]: I1123 06:51:13.591861 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75tvw" event={"ID":"48ee2284-9e6c-4049-bf43-9473c176ca62","Type":"ContainerDied","Data":"24dcf40a510eeffce4a2bd3b62e5193a213d841f49da99fd1dee56cb7878dab0"}
Nov 23 06:51:13 crc kubenswrapper[4988]: I1123 06:51:13.598358 4988 generic.go:334] "Generic (PLEG): container finished" podID="6a120914-49bb-4c6c-9b35-7e89e7749110" containerID="9eb395d1d49c43dbf941bbcc4e9c0ec45b436e1bcfb07878769e2b23a1f22fc5" exitCode=0
Nov 23 06:51:13 crc kubenswrapper[4988]: I1123 06:51:13.598429 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpdnr" event={"ID":"6a120914-49bb-4c6c-9b35-7e89e7749110","Type":"ContainerDied","Data":"9eb395d1d49c43dbf941bbcc4e9c0ec45b436e1bcfb07878769e2b23a1f22fc5"}
Nov 23 06:51:14 crc kubenswrapper[4988]: I1123 06:51:14.605064 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpdnr" event={"ID":"6a120914-49bb-4c6c-9b35-7e89e7749110","Type":"ContainerStarted","Data":"4b1e00ae93c320d50a1d1abc47b46c66b8fdccb8b65caf37e0870f29dcde263f"}
Nov 23 06:51:14 crc kubenswrapper[4988]: I1123 06:51:14.607995 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9rjm" event={"ID":"8e2aaec8-1a4e-4655-ba7f-ce2d2065920d","Type":"ContainerStarted","Data":"e0ee7c02ec7c2f464c96cf9e5c3b727ffcc255c008ee28e3dfee79fb1dc8b62e"}
Nov 23 06:51:14 crc kubenswrapper[4988]: I1123 06:51:14.609916 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb5m" event={"ID":"e8d54713-f5b1-4f71-a8a1-8b604068e791","Type":"ContainerStarted","Data":"1b276497d5c675685f70e2bb62342cba771c77e0ae43c381262f1bde98310661"}
Nov 23 06:51:14 crc kubenswrapper[4988]: I1123 06:51:14.611571 4988 generic.go:334] "Generic (PLEG): container finished" podID="48ee2284-9e6c-4049-bf43-9473c176ca62" containerID="1f2cee98ad961987eb792d9df83bc2813dd47cfeed8176fb9a4355047725eaec" exitCode=0
Nov 23 06:51:14 crc kubenswrapper[4988]: I1123 06:51:14.611593 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75tvw" event={"ID":"48ee2284-9e6c-4049-bf43-9473c176ca62","Type":"ContainerDied","Data":"1f2cee98ad961987eb792d9df83bc2813dd47cfeed8176fb9a4355047725eaec"}
Nov 23 06:51:14 crc kubenswrapper[4988]: I1123 06:51:14.620595 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gpdnr" podStartSLOduration=3.138302435 podStartE2EDuration="7.620573643s" podCreationTimestamp="2025-11-23 06:51:07 +0000 UTC" firstStartedPulling="2025-11-23 06:51:09.550909289 +0000 UTC m=+321.859422062" lastFinishedPulling="2025-11-23 06:51:14.033180497 +0000 UTC m=+326.341693270" observedRunningTime="2025-11-23 06:51:14.619339843 +0000 UTC m=+326.927852606" watchObservedRunningTime="2025-11-23 06:51:14.620573643 +0000 UTC m=+326.929086406"
Nov 23 06:51:14 crc kubenswrapper[4988]: I1123 06:51:14.634895 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9rjm" podStartSLOduration=2.106630112 podStartE2EDuration="6.634879368s" podCreationTimestamp="2025-11-23 06:51:08 +0000 UTC" firstStartedPulling="2025-11-23 06:51:09.546180322 +0000 UTC m=+321.854693125" lastFinishedPulling="2025-11-23 06:51:14.074429618 +0000 UTC m=+326.382942381" observedRunningTime="2025-11-23 06:51:14.633660157 +0000 UTC m=+326.942172940" watchObservedRunningTime="2025-11-23 06:51:14.634879368 +0000 UTC m=+326.943392131"
Nov 23 06:51:14 crc kubenswrapper[4988]: I1123 06:51:14.667363 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7jb5m" podStartSLOduration=3.372081783 podStartE2EDuration="4.667332971s" podCreationTimestamp="2025-11-23 06:51:10 +0000 UTC" firstStartedPulling="2025-11-23 06:51:12.663524776 +0000 UTC m=+324.972037539" lastFinishedPulling="2025-11-23 06:51:13.958775964 +0000 UTC m=+326.267288727" observedRunningTime="2025-11-23 06:51:14.665384923 +0000 UTC m=+326.973897696" watchObservedRunningTime="2025-11-23 06:51:14.667332971 +0000 UTC m=+326.975845734"
06:51:14.665384923 +0000 UTC m=+326.973897696" watchObservedRunningTime="2025-11-23 06:51:14.667332971 +0000 UTC m=+326.975845734" Nov 23 06:51:16 crc kubenswrapper[4988]: I1123 06:51:16.624697 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75tvw" event={"ID":"48ee2284-9e6c-4049-bf43-9473c176ca62","Type":"ContainerStarted","Data":"3d9c47cea2d5c1b730aa064f011e4d42a1be4a280e4970e9f3d4c0f294c40db5"} Nov 23 06:51:16 crc kubenswrapper[4988]: I1123 06:51:16.642806 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-75tvw" podStartSLOduration=5.215669262 podStartE2EDuration="6.642790236s" podCreationTimestamp="2025-11-23 06:51:10 +0000 UTC" firstStartedPulling="2025-11-23 06:51:13.593048186 +0000 UTC m=+325.901560949" lastFinishedPulling="2025-11-23 06:51:15.02016916 +0000 UTC m=+327.328681923" observedRunningTime="2025-11-23 06:51:16.64132156 +0000 UTC m=+328.949834323" watchObservedRunningTime="2025-11-23 06:51:16.642790236 +0000 UTC m=+328.951302999" Nov 23 06:51:18 crc kubenswrapper[4988]: I1123 06:51:18.199364 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gpdnr" Nov 23 06:51:18 crc kubenswrapper[4988]: I1123 06:51:18.199597 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gpdnr" Nov 23 06:51:18 crc kubenswrapper[4988]: I1123 06:51:18.252701 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gpdnr" Nov 23 06:51:18 crc kubenswrapper[4988]: I1123 06:51:18.697226 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9rjm" Nov 23 06:51:18 crc kubenswrapper[4988]: I1123 06:51:18.697270 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9rjm" Nov 23 06:51:19 crc kubenswrapper[4988]: I1123 06:51:19.755382 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9rjm" podUID="8e2aaec8-1a4e-4655-ba7f-ce2d2065920d" containerName="registry-server" probeResult="failure" output=< Nov 23 06:51:19 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 06:51:19 crc kubenswrapper[4988]: > Nov 23 06:51:20 crc kubenswrapper[4988]: I1123 06:51:20.594263 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7jb5m" Nov 23 06:51:20 crc kubenswrapper[4988]: I1123 06:51:20.594795 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7jb5m" Nov 23 06:51:20 crc kubenswrapper[4988]: I1123 06:51:20.649437 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7jb5m" Nov 23 06:51:20 crc kubenswrapper[4988]: I1123 06:51:20.716582 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7jb5m" Nov 23 06:51:20 crc kubenswrapper[4988]: I1123 06:51:20.791115 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-75tvw" Nov 23 06:51:20 crc kubenswrapper[4988]: I1123 06:51:20.791219 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-75tvw" Nov 23 06:51:20 crc kubenswrapper[4988]: I1123 06:51:20.837473 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-75tvw" Nov 23 06:51:21 crc kubenswrapper[4988]: I1123 06:51:21.695227 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-75tvw" Nov 23 06:51:28 crc kubenswrapper[4988]: I1123 06:51:28.252550 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gpdnr" Nov 23 06:51:28 crc kubenswrapper[4988]: I1123 06:51:28.774313 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9rjm" Nov 23 06:51:28 crc kubenswrapper[4988]: I1123 06:51:28.821482 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9rjm" Nov 23 06:51:51 crc kubenswrapper[4988]: I1123 06:51:51.672879 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:51:51 crc kubenswrapper[4988]: I1123 06:51:51.673937 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.523483 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wkxnz"] Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.525337 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.540076 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wkxnz"] Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.681841 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83fcdf8f-414a-4674-8fcf-027ff8278cc1-registry-tls\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.681907 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83fcdf8f-414a-4674-8fcf-027ff8278cc1-registry-certificates\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.681931 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6jgc\" (UniqueName: \"kubernetes.io/projected/83fcdf8f-414a-4674-8fcf-027ff8278cc1-kube-api-access-w6jgc\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.682002 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83fcdf8f-414a-4674-8fcf-027ff8278cc1-bound-sa-token\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.682042 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.682081 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83fcdf8f-414a-4674-8fcf-027ff8278cc1-trusted-ca\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.682103 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83fcdf8f-414a-4674-8fcf-027ff8278cc1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.682157 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/83fcdf8f-414a-4674-8fcf-027ff8278cc1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.703930 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.783509 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6jgc\" (UniqueName: \"kubernetes.io/projected/83fcdf8f-414a-4674-8fcf-027ff8278cc1-kube-api-access-w6jgc\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.783991 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83fcdf8f-414a-4674-8fcf-027ff8278cc1-bound-sa-token\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.784066 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83fcdf8f-414a-4674-8fcf-027ff8278cc1-trusted-ca\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.784098 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83fcdf8f-414a-4674-8fcf-027ff8278cc1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.784183 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83fcdf8f-414a-4674-8fcf-027ff8278cc1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.784348 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83fcdf8f-414a-4674-8fcf-027ff8278cc1-registry-tls\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.784388 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83fcdf8f-414a-4674-8fcf-027ff8278cc1-registry-certificates\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.785348 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83fcdf8f-414a-4674-8fcf-027ff8278cc1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.786578 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83fcdf8f-414a-4674-8fcf-027ff8278cc1-registry-certificates\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.786770 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83fcdf8f-414a-4674-8fcf-027ff8278cc1-trusted-ca\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.794882 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83fcdf8f-414a-4674-8fcf-027ff8278cc1-registry-tls\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.795331 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83fcdf8f-414a-4674-8fcf-027ff8278cc1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.825891 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83fcdf8f-414a-4674-8fcf-027ff8278cc1-bound-sa-token\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.828691 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6jgc\" (UniqueName: \"kubernetes.io/projected/83fcdf8f-414a-4674-8fcf-027ff8278cc1-kube-api-access-w6jgc\") pod \"image-registry-66df7c8f76-wkxnz\" (UID: \"83fcdf8f-414a-4674-8fcf-027ff8278cc1\") " pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:02 crc kubenswrapper[4988]: I1123 06:52:02.896356 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:03 crc kubenswrapper[4988]: I1123 06:52:03.357756 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wkxnz"] Nov 23 06:52:03 crc kubenswrapper[4988]: I1123 06:52:03.944748 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" event={"ID":"83fcdf8f-414a-4674-8fcf-027ff8278cc1","Type":"ContainerStarted","Data":"e0a6db59cf6d8d67b802a53cb9604f580c19d5baed0022827aa2cd958306741a"} Nov 23 06:52:03 crc kubenswrapper[4988]: I1123 06:52:03.944831 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" event={"ID":"83fcdf8f-414a-4674-8fcf-027ff8278cc1","Type":"ContainerStarted","Data":"cb82197b1f9bc5749f62b4709b2cca800023bc52754e89c052742510772c250e"} Nov 23 06:52:03 crc kubenswrapper[4988]: I1123 06:52:03.944974 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:03 crc kubenswrapper[4988]: I1123 06:52:03.975845 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" podStartSLOduration=1.97578657 podStartE2EDuration="1.97578657s" podCreationTimestamp="2025-11-23 06:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:03.969221058 +0000 UTC m=+376.277733891" watchObservedRunningTime="2025-11-23 06:52:03.97578657 +0000 UTC m=+376.284299363" Nov 23 06:52:21 crc kubenswrapper[4988]: I1123 06:52:21.672509 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:52:21 crc kubenswrapper[4988]: I1123 06:52:21.673274 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:52:22 crc kubenswrapper[4988]: I1123 06:52:22.908323 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-wkxnz" Nov 23 06:52:23 crc kubenswrapper[4988]: I1123 06:52:23.003328 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c89ht"] Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.055739 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" podUID="8693d0cd-897f-4bef-a923-783f1bf8c584" containerName="registry" containerID="cri-o://3f6397b885a9fb373eff14f67bf8813dd8b47b812c941c9f93e99a618b267be0" gracePeriod=30 Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.255824 4988 generic.go:334] "Generic (PLEG): container finished" podID="8693d0cd-897f-4bef-a923-783f1bf8c584" containerID="3f6397b885a9fb373eff14f67bf8813dd8b47b812c941c9f93e99a618b267be0" exitCode=0 Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.255908 4988 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" event={"ID":"8693d0cd-897f-4bef-a923-783f1bf8c584","Type":"ContainerDied","Data":"3f6397b885a9fb373eff14f67bf8813dd8b47b812c941c9f93e99a618b267be0"} Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.443765 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.504073 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8693d0cd-897f-4bef-a923-783f1bf8c584\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.504161 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8693d0cd-897f-4bef-a923-783f1bf8c584-ca-trust-extracted\") pod \"8693d0cd-897f-4bef-a923-783f1bf8c584\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.504235 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-tls\") pod \"8693d0cd-897f-4bef-a923-783f1bf8c584\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.504288 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-certificates\") pod \"8693d0cd-897f-4bef-a923-783f1bf8c584\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.504327 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-trusted-ca\") pod \"8693d0cd-897f-4bef-a923-783f1bf8c584\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.504419 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8693d0cd-897f-4bef-a923-783f1bf8c584-installation-pull-secrets\") pod \"8693d0cd-897f-4bef-a923-783f1bf8c584\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.504460 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwrlg\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-kube-api-access-hwrlg\") pod \"8693d0cd-897f-4bef-a923-783f1bf8c584\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.504500 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-bound-sa-token\") pod \"8693d0cd-897f-4bef-a923-783f1bf8c584\" (UID: \"8693d0cd-897f-4bef-a923-783f1bf8c584\") " Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.505580 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8693d0cd-897f-4bef-a923-783f1bf8c584" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.509474 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8693d0cd-897f-4bef-a923-783f1bf8c584" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.510558 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-kube-api-access-hwrlg" (OuterVolumeSpecName: "kube-api-access-hwrlg") pod "8693d0cd-897f-4bef-a923-783f1bf8c584" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584"). InnerVolumeSpecName "kube-api-access-hwrlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.510806 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8693d0cd-897f-4bef-a923-783f1bf8c584" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.512087 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8693d0cd-897f-4bef-a923-783f1bf8c584" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.513331 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8693d0cd-897f-4bef-a923-783f1bf8c584-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8693d0cd-897f-4bef-a923-783f1bf8c584" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.517379 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8693d0cd-897f-4bef-a923-783f1bf8c584" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.527157 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8693d0cd-897f-4bef-a923-783f1bf8c584-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8693d0cd-897f-4bef-a923-783f1bf8c584" (UID: "8693d0cd-897f-4bef-a923-783f1bf8c584"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.606072 4988 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8693d0cd-897f-4bef-a923-783f1bf8c584-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.606117 4988 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.606128 4988 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.606141 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8693d0cd-897f-4bef-a923-783f1bf8c584-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.606151 4988 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8693d0cd-897f-4bef-a923-783f1bf8c584-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.606162 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwrlg\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-kube-api-access-hwrlg\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:48 crc kubenswrapper[4988]: I1123 06:52:48.606173 4988 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8693d0cd-897f-4bef-a923-783f1bf8c584-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:49 crc kubenswrapper[4988]: I1123 06:52:49.266760 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" event={"ID":"8693d0cd-897f-4bef-a923-783f1bf8c584","Type":"ContainerDied","Data":"fe5d58dc68c5f407bc14d49ac4b905c4488cba795137889844bd8e76e594d8c5"} Nov 23 06:52:49 crc kubenswrapper[4988]: I1123 06:52:49.267431 4988 scope.go:117] "RemoveContainer" containerID="3f6397b885a9fb373eff14f67bf8813dd8b47b812c941c9f93e99a618b267be0" Nov 23 06:52:49 crc kubenswrapper[4988]: I1123 06:52:49.266806 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-c89ht" Nov 23 06:52:49 crc kubenswrapper[4988]: I1123 06:52:49.301304 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c89ht"] Nov 23 06:52:49 crc kubenswrapper[4988]: I1123 06:52:49.302800 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-c89ht"] Nov 23 06:52:50 crc kubenswrapper[4988]: I1123 06:52:50.506857 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8693d0cd-897f-4bef-a923-783f1bf8c584" path="/var/lib/kubelet/pods/8693d0cd-897f-4bef-a923-783f1bf8c584/volumes" Nov 23 06:52:51 crc kubenswrapper[4988]: I1123 06:52:51.672714 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:52:51 crc kubenswrapper[4988]: I1123 06:52:51.672812 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:52:51 crc kubenswrapper[4988]: I1123 06:52:51.672872 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:52:51 crc kubenswrapper[4988]: I1123 06:52:51.673562 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9cd87be3f1b515f3063cda794e5846ca1378607ebdd6150a0a83790b2e31e36b"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:52:51 crc kubenswrapper[4988]: I1123 06:52:51.673647 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://9cd87be3f1b515f3063cda794e5846ca1378607ebdd6150a0a83790b2e31e36b" gracePeriod=600 Nov 23 06:52:52 crc kubenswrapper[4988]: I1123 06:52:52.288648 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="9cd87be3f1b515f3063cda794e5846ca1378607ebdd6150a0a83790b2e31e36b" exitCode=0 Nov 23 06:52:52 crc kubenswrapper[4988]: I1123 06:52:52.288731 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"9cd87be3f1b515f3063cda794e5846ca1378607ebdd6150a0a83790b2e31e36b"} Nov 23 06:52:52 crc kubenswrapper[4988]: I1123 06:52:52.289165 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"b759bb52933ee97cfc0f6a0286089efb90b906890824c971add7c9ea19f50f2b"} Nov 23 06:52:52 crc kubenswrapper[4988]: I1123 06:52:52.289213 4988 scope.go:117] "RemoveContainer" 
containerID="3b442db4871122720047111c6b1bf37172c17250da105d4b8b21601e2274efb0" Nov 23 06:54:48 crc kubenswrapper[4988]: I1123 06:54:48.739780 4988 scope.go:117] "RemoveContainer" containerID="ee7ad72b0fed76027b04444715f8404d5264f1557511bd9b8f5de91c272b3e81" Nov 23 06:54:51 crc kubenswrapper[4988]: I1123 06:54:51.672309 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:54:51 crc kubenswrapper[4988]: I1123 06:54:51.672789 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:55:21 crc kubenswrapper[4988]: I1123 06:55:21.672365 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:55:21 crc kubenswrapper[4988]: I1123 06:55:21.673309 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:55:48 crc kubenswrapper[4988]: I1123 06:55:48.792504 4988 scope.go:117] "RemoveContainer" containerID="20367d5cac99723075ac1bf82e9f6c677cc5a0ed24dad6f0b43edbc95eef4f10" Nov 23 06:55:48 crc kubenswrapper[4988]: I1123 06:55:48.813716 4988 scope.go:117] "RemoveContainer" containerID="22c99e2b573eee8d2bc5187d34a5c91789209b147c75316bbcfe653e2d72f483" Nov 23 06:55:51 crc kubenswrapper[4988]: I1123 06:55:51.672602 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:55:51 crc kubenswrapper[4988]: I1123 06:55:51.673288 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:55:51 crc kubenswrapper[4988]: I1123 06:55:51.673378 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:55:51 crc kubenswrapper[4988]: I1123 06:55:51.674454 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b759bb52933ee97cfc0f6a0286089efb90b906890824c971add7c9ea19f50f2b"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:55:51 crc kubenswrapper[4988]: I1123 
06:55:51.674579 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://b759bb52933ee97cfc0f6a0286089efb90b906890824c971add7c9ea19f50f2b" gracePeriod=600 Nov 23 06:55:52 crc kubenswrapper[4988]: I1123 06:55:52.610540 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="b759bb52933ee97cfc0f6a0286089efb90b906890824c971add7c9ea19f50f2b" exitCode=0 Nov 23 06:55:52 crc kubenswrapper[4988]: I1123 06:55:52.610633 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"b759bb52933ee97cfc0f6a0286089efb90b906890824c971add7c9ea19f50f2b"} Nov 23 06:55:52 crc kubenswrapper[4988]: I1123 06:55:52.611068 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"ae910c0fb450e025f230121941feb80702360a059115278b81abe53533b952bb"} Nov 23 06:55:52 crc kubenswrapper[4988]: I1123 06:55:52.611100 4988 scope.go:117] "RemoveContainer" containerID="9cd87be3f1b515f3063cda794e5846ca1378607ebdd6150a0a83790b2e31e36b" Nov 23 06:57:48 crc kubenswrapper[4988]: I1123 06:57:48.801800 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv"] Nov 23 06:57:48 crc kubenswrapper[4988]: I1123 06:57:48.802727 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" podUID="38368132-21a2-414a-8b15-b5c648bb871e" containerName="route-controller-manager" containerID="cri-o://5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98" gracePeriod=30 Nov 23 06:57:48 crc kubenswrapper[4988]: I1123 06:57:48.806813 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dj2p8"] Nov 23 06:57:48 crc kubenswrapper[4988]: I1123 06:57:48.807082 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" podUID="7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" containerName="controller-manager" containerID="cri-o://ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72" gracePeriod=30 Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.180257 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.185890 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.360673 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-client-ca\") pod \"38368132-21a2-414a-8b15-b5c648bb871e\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.360729 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-client-ca\") pod \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.360799 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-config\") pod \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.360837 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89vfn\" (UniqueName: \"kubernetes.io/projected/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-kube-api-access-89vfn\") pod \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.360860 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-proxy-ca-bundles\") pod \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.360897 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38368132-21a2-414a-8b15-b5c648bb871e-serving-cert\") pod \"38368132-21a2-414a-8b15-b5c648bb871e\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.360937 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-serving-cert\") pod \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\" (UID: \"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747\") " Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.360989 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md29l\" (UniqueName: \"kubernetes.io/projected/38368132-21a2-414a-8b15-b5c648bb871e-kube-api-access-md29l\") pod \"38368132-21a2-414a-8b15-b5c648bb871e\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.361050 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-config\") pod \"38368132-21a2-414a-8b15-b5c648bb871e\" (UID: \"38368132-21a2-414a-8b15-b5c648bb871e\") " Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.362055 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-config" (OuterVolumeSpecName: "config") pod "38368132-21a2-414a-8b15-b5c648bb871e" (UID: 
"38368132-21a2-414a-8b15-b5c648bb871e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.362087 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-config" (OuterVolumeSpecName: "config") pod "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" (UID: "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.362461 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" (UID: "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.362580 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-client-ca" (OuterVolumeSpecName: "client-ca") pod "38368132-21a2-414a-8b15-b5c648bb871e" (UID: "38368132-21a2-414a-8b15-b5c648bb871e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.363039 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-client-ca" (OuterVolumeSpecName: "client-ca") pod "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" (UID: "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.369156 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-kube-api-access-89vfn" (OuterVolumeSpecName: "kube-api-access-89vfn") pod "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" (UID: "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747"). InnerVolumeSpecName "kube-api-access-89vfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.369326 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38368132-21a2-414a-8b15-b5c648bb871e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38368132-21a2-414a-8b15-b5c648bb871e" (UID: "38368132-21a2-414a-8b15-b5c648bb871e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.369479 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38368132-21a2-414a-8b15-b5c648bb871e-kube-api-access-md29l" (OuterVolumeSpecName: "kube-api-access-md29l") pod "38368132-21a2-414a-8b15-b5c648bb871e" (UID: "38368132-21a2-414a-8b15-b5c648bb871e"). InnerVolumeSpecName "kube-api-access-md29l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.369817 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" (UID: "7e2bfd4a-7d4c-48ab-9985-e8d7fddde747"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.413910 4988 generic.go:334] "Generic (PLEG): container finished" podID="7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" containerID="ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72" exitCode=0 Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.413987 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" event={"ID":"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747","Type":"ContainerDied","Data":"ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72"} Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.414017 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" event={"ID":"7e2bfd4a-7d4c-48ab-9985-e8d7fddde747","Type":"ContainerDied","Data":"2abfbb6e4da461396c5b4950f94d46260a531f60a3750a9cac6d6b12dd752e5e"} Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.414016 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dj2p8" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.414035 4988 scope.go:117] "RemoveContainer" containerID="ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.418937 4988 generic.go:334] "Generic (PLEG): container finished" podID="38368132-21a2-414a-8b15-b5c648bb871e" containerID="5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98" exitCode=0 Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.418972 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" event={"ID":"38368132-21a2-414a-8b15-b5c648bb871e","Type":"ContainerDied","Data":"5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98"} Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.418992 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" event={"ID":"38368132-21a2-414a-8b15-b5c648bb871e","Type":"ContainerDied","Data":"2aeed15f774f349e49590e295f42d3082e5501bd5a637d8eb9fbc1bbe05601b1"} Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.419068 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.435487 4988 scope.go:117] "RemoveContainer" containerID="ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72" Nov 23 06:57:49 crc kubenswrapper[4988]: E1123 06:57:49.435938 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72\": container with ID starting with ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72 not found: ID does not exist" containerID="ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.435994 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72"} err="failed to get container status \"ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72\": rpc error: code = NotFound desc = could not find container \"ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72\": container with ID starting with ef58f44084a3b62e49737a1ae275df871bdab64708904ea4fcb68194dd751e72 not found: ID does not exist" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.436019 4988 scope.go:117] "RemoveContainer" containerID="5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.444595 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dj2p8"] Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.446567 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dj2p8"] Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.461156 4988 scope.go:117] "RemoveContainer" containerID="5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462035 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462068 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89vfn\" (UniqueName: \"kubernetes.io/projected/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-kube-api-access-89vfn\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462083 4988 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462096 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38368132-21a2-414a-8b15-b5c648bb871e-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462108 4988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462119 4988 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-md29l\" (UniqueName: \"kubernetes.io/projected/38368132-21a2-414a-8b15-b5c648bb871e-kube-api-access-md29l\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462131 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4988]: E1123 06:57:49.462124 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98\": container with ID starting with 5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98 not found: ID does not exist" containerID="5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462176 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98"} err="failed to get container status \"5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98\": rpc error: code = NotFound desc = could not find container \"5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98\": container with ID starting with 5fc4baafbbede54e60f3ceb2afda3c3e23183535a3007ff875d92e1fdf84cc98 not found: ID does not exist" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462142 4988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38368132-21a2-414a-8b15-b5c648bb871e-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.462238 4988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.468291 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv"] Nov 23 06:57:49 crc kubenswrapper[4988]: I1123 06:57:49.472540 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5pxjv"] Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.504257 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38368132-21a2-414a-8b15-b5c648bb871e" path="/var/lib/kubelet/pods/38368132-21a2-414a-8b15-b5c648bb871e/volumes" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.505592 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" path="/var/lib/kubelet/pods/7e2bfd4a-7d4c-48ab-9985-e8d7fddde747/volumes" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.883327 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7"] Nov 23 06:57:50 crc kubenswrapper[4988]: E1123 06:57:50.883644 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" containerName="controller-manager" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.883666 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" containerName="controller-manager" Nov 23 06:57:50 crc kubenswrapper[4988]: E1123 06:57:50.883683 4988 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8693d0cd-897f-4bef-a923-783f1bf8c584" containerName="registry" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.883692 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8693d0cd-897f-4bef-a923-783f1bf8c584" containerName="registry" Nov 23 06:57:50 crc kubenswrapper[4988]: E1123 06:57:50.883715 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38368132-21a2-414a-8b15-b5c648bb871e" containerName="route-controller-manager" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.883724 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="38368132-21a2-414a-8b15-b5c648bb871e" containerName="route-controller-manager" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.883839 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8693d0cd-897f-4bef-a923-783f1bf8c584" containerName="registry" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.883853 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="38368132-21a2-414a-8b15-b5c648bb871e" containerName="route-controller-manager" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.883864 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2bfd4a-7d4c-48ab-9985-e8d7fddde747" containerName="controller-manager" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.884323 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.887470 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk"] Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.888360 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.891380 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.891432 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.891595 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.891670 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.891793 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.891823 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.891969 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.892003 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.892218 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.892339 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.892583 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.892851 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.900544 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk"] Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.905266 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.913613 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7"] Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.982260 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b785af-693f-412c-a4f8-3a42ff8a413c-serving-cert\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.982322 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8b785af-693f-412c-a4f8-3a42ff8a413c-proxy-ca-bundles\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.982353 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-serving-cert\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.982372 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-config\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.982413 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8b785af-693f-412c-a4f8-3a42ff8a413c-client-ca\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.982434 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4nq\" (UniqueName: \"kubernetes.io/projected/d8b785af-693f-412c-a4f8-3a42ff8a413c-kube-api-access-hb4nq\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.982459 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b785af-693f-412c-a4f8-3a42ff8a413c-config\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.982527 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d2v7\" (UniqueName: \"kubernetes.io/projected/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-kube-api-access-6d2v7\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:50 crc kubenswrapper[4988]: I1123 06:57:50.982556 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-client-ca\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.083996 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6d2v7\" (UniqueName: \"kubernetes.io/projected/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-kube-api-access-6d2v7\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.084052 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-client-ca\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.084099 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b785af-693f-412c-a4f8-3a42ff8a413c-serving-cert\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.084131 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8b785af-693f-412c-a4f8-3a42ff8a413c-proxy-ca-bundles\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.084150 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-serving-cert\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.084173 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-config\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.084225 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8b785af-693f-412c-a4f8-3a42ff8a413c-client-ca\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.084246 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb4nq\" (UniqueName: \"kubernetes.io/projected/d8b785af-693f-412c-a4f8-3a42ff8a413c-kube-api-access-hb4nq\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.084266 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b785af-693f-412c-a4f8-3a42ff8a413c-config\") pod 
\"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.085394 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-client-ca\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.085730 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-config\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.085729 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8b785af-693f-412c-a4f8-3a42ff8a413c-client-ca\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.085962 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b785af-693f-412c-a4f8-3a42ff8a413c-config\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.085974 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8b785af-693f-412c-a4f8-3a42ff8a413c-proxy-ca-bundles\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.088257 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b785af-693f-412c-a4f8-3a42ff8a413c-serving-cert\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.090625 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-serving-cert\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.103150 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d2v7\" (UniqueName: \"kubernetes.io/projected/f17a96a5-483f-4cc3-99d5-95a5d0829fc5-kube-api-access-6d2v7\") pod \"route-controller-manager-6fb595dfdb-qx6jk\" (UID: \"f17a96a5-483f-4cc3-99d5-95a5d0829fc5\") " pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 
06:57:51.103732 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb4nq\" (UniqueName: \"kubernetes.io/projected/d8b785af-693f-412c-a4f8-3a42ff8a413c-kube-api-access-hb4nq\") pod \"controller-manager-5dd8956ff6-f7ns7\" (UID: \"d8b785af-693f-412c-a4f8-3a42ff8a413c\") " pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.205327 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.216460 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.520285 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk"] Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.570563 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7"] Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.672558 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:57:51 crc kubenswrapper[4988]: I1123 06:57:51.672953 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.442510 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" event={"ID":"f17a96a5-483f-4cc3-99d5-95a5d0829fc5","Type":"ContainerStarted","Data":"20c44c0aab9f03cc482536fc96f52fe58310cad4ec56c838bf81bf8b362ffc34"} Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.442557 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" event={"ID":"f17a96a5-483f-4cc3-99d5-95a5d0829fc5","Type":"ContainerStarted","Data":"9c95096330abee20ccb4dbaff45c8e5225b2291834fd39a185b9b8ae275d30e4"} Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.443542 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.444929 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" event={"ID":"d8b785af-693f-412c-a4f8-3a42ff8a413c","Type":"ContainerStarted","Data":"f4a21349a711106b6138f2f9237c10ea34e3b8a2d0ca6636dab83085a34f8f9e"} Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.444958 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" event={"ID":"d8b785af-693f-412c-a4f8-3a42ff8a413c","Type":"ContainerStarted","Data":"c773f9440e2643c58edffac4b9b4ce0f192e2e4c7dc209f060468b96f74914d4"} 
Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.445433 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.447811 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.449043 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.472293 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb595dfdb-qx6jk" podStartSLOduration=4.4722709290000005 podStartE2EDuration="4.472270929s" podCreationTimestamp="2025-11-23 06:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:57:52.457296617 +0000 UTC m=+724.765809380" watchObservedRunningTime="2025-11-23 06:57:52.472270929 +0000 UTC m=+724.780783682" Nov 23 06:57:52 crc kubenswrapper[4988]: I1123 06:57:52.509956 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dd8956ff6-f7ns7" podStartSLOduration=4.509939552 podStartE2EDuration="4.509939552s" podCreationTimestamp="2025-11-23 06:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:57:52.507249447 +0000 UTC m=+724.815762220" watchObservedRunningTime="2025-11-23 06:57:52.509939552 +0000 UTC m=+724.818452315" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.149457 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxqnz"] Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.150617 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovn-controller" containerID="cri-o://9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842" gracePeriod=30 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.150742 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="sbdb" containerID="cri-o://3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3" gracePeriod=30 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.150828 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="northd" containerID="cri-o://fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e" gracePeriod=30 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.150743 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892" gracePeriod=30 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.150861 4988 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kube-rbac-proxy-node" containerID="cri-o://0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48" gracePeriod=30 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.150763 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovn-acl-logging" containerID="cri-o://544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31" gracePeriod=30 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.150686 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="nbdb" containerID="cri-o://19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472" gracePeriod=30 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.201942 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" containerID="cri-o://a2a5dbc04610d0a4a8e6e80a2ce783434d59d7f84da7b04c4a1c7fba5e900935" gracePeriod=30 Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.240321 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.242402 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.248766 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.248908 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.250498 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.250590 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="sbdb" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.250640 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.250679 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="nbdb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.488847 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4p82c_0dde7218-bd4b-4585-b049-cb8db163fdac/kube-multus/1.log" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.489786 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4p82c_0dde7218-bd4b-4585-b049-cb8db163fdac/kube-multus/0.log" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.489860 4988 generic.go:334] "Generic (PLEG): container finished" podID="0dde7218-bd4b-4585-b049-cb8db163fdac" containerID="ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf" exitCode=2 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.489924 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4p82c" event={"ID":"0dde7218-bd4b-4585-b049-cb8db163fdac","Type":"ContainerDied","Data":"ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf"} Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.490022 4988 scope.go:117] "RemoveContainer" containerID="26494c61d4faf5161576a0020d1df57e31ee386f0bab5f89af443f7c8a3adffd" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.490644 4988 scope.go:117] "RemoveContainer" containerID="ef332d9006399b2b79a4008e5d899c1989308798f7a409771877da2c949dc8bf" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.492703 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/3.log" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.501100 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovn-acl-logging/0.log" Nov 23 
06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.502421 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovn-controller/0.log" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503030 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="a2a5dbc04610d0a4a8e6e80a2ce783434d59d7f84da7b04c4a1c7fba5e900935" exitCode=0 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503060 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3" exitCode=0 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503080 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472" exitCode=0 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503095 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e" exitCode=0 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503108 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892" exitCode=0 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503124 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48" exitCode=0 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503138 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31" exitCode=143 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503154 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerID="9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842" exitCode=143 Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503178 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"a2a5dbc04610d0a4a8e6e80a2ce783434d59d7f84da7b04c4a1c7fba5e900935"} Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503260 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3"} Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503284 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472"} Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503305 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e"} Nov 23 
06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503324 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892"} Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503341 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48"} Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503359 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31"} Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.503378 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842"} Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.532052 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovnkube-controller/3.log" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.535448 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovn-acl-logging/0.log" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.537097 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovn-controller/0.log" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.537536 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.547010 4988 scope.go:117] "RemoveContainer" containerID="2a578e1b08780ede6e3747bba8f7d85fa1b9ecbf07edcb395f3886ebe7c266c7" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.594655 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qlfbb"] Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.594958 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.594980 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.594997 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="northd" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595007 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="northd" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595018 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kubecfg-setup" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595025 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kubecfg-setup" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595035 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595042 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595052 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="sbdb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595058 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="sbdb" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595066 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovn-acl-logging" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595073 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovn-acl-logging" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595082 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kube-rbac-proxy-node" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595088 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kube-rbac-proxy-node" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595098 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="nbdb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595105 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="nbdb" 
Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595115 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovn-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595124 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovn-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595135 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595142 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595150 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595156 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595164 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595170 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595297 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="nbdb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595309 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595317 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovn-acl-logging" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595325 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="northd" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595332 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595340 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kube-rbac-proxy-node" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595346 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595354 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595363 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="sbdb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595372 4988 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovn-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595379 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: E1123 06:57:59.595472 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595479 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.595573 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" containerName="ovnkube-controller" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.597298 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704127 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704189 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-netd\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704235 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-ovn\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704262 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-netns\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704304 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-ovn-kubernetes\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704339 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-script-lib\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704362 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-systemd-units\") pod 
\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704386 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-etc-openvswitch\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704408 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-openvswitch\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704445 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-var-lib-openvswitch\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704470 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-systemd\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704501 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-log-socket\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704539 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-env-overrides\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704595 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-slash\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704624 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-node-log\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704645 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-bin\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704680 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovn-node-metrics-cert\") pod 
\"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704708 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-kubelet\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704742 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-config\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704767 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hdwf\" (UniqueName: \"kubernetes.io/projected/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-kube-api-access-2hdwf\") pod \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\" (UID: \"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931\") " Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704915 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-run-ovn-kubernetes\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704946 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-cni-netd\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704970 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-kubelet\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.704993 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-run-openvswitch\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705018 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-systemd-units\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705041 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-etc-openvswitch\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705064 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-slash\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705096 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705121 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-node-log\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705154 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-env-overrides\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705181 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-cni-bin\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705284 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-ovnkube-script-lib\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705275 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-slash" (OuterVolumeSpecName: "host-slash") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705309 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705346 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705350 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705398 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705396 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705379 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705437 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-log-socket" (OuterVolumeSpecName: "log-socket") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705459 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705495 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706010 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706024 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706058 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705313 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-ovnkube-config\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706179 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-ovn-node-metrics-cert\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.705377 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-node-log" (OuterVolumeSpecName: "node-log") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706089 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706124 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706298 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-var-lib-openvswitch\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706337 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jhsm\" (UniqueName: \"kubernetes.io/projected/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-kube-api-access-4jhsm\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706381 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-run-netns\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706413 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706431 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-run-systemd\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706531 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-run-ovn\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706618 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-log-socket\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706848 4988 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706862 4988 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706875 4988 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706884 4988 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706893 4988 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706905 4988 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706914 4988 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706923 4988 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706932 4988 reconciler_common.go:293] "Volume detached 
for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706943 4988 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706952 4988 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706960 4988 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-log-socket\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706969 4988 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706978 4988 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-slash\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706986 4988 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-node-log\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.706995 4988 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.707005 4988 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.712727 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.713075 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-kube-api-access-2hdwf" (OuterVolumeSpecName: "kube-api-access-2hdwf") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). InnerVolumeSpecName "kube-api-access-2hdwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.723555 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" (UID: "cb5bfadf-3097-45a0-a0d8-2b75e4c1e931"). 
InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808691 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808758 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-node-log\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808794 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-env-overrides\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808807 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808822 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-cni-bin\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808862 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-cni-bin\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808865 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-ovnkube-script-lib\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808906 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-ovnkube-config\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808939 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-ovn-node-metrics-cert\") pod \"ovnkube-node-qlfbb\" (UID: 
\"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808972 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-var-lib-openvswitch\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.808995 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jhsm\" (UniqueName: \"kubernetes.io/projected/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-kube-api-access-4jhsm\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809020 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-run-netns\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809047 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-run-systemd\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809070 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-run-ovn\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809090 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-log-socket\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809114 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-run-ovn-kubernetes\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809142 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-cni-netd\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809161 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-kubelet\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809184 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-run-openvswitch\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809255 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-systemd-units\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809280 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-etc-openvswitch\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809303 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-slash\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809355 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hdwf\" (UniqueName: \"kubernetes.io/projected/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-kube-api-access-2hdwf\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809383 4988 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809398 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809432 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-slash\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809467 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-run-netns\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809505 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-ovnkube-script-lib\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc 
kubenswrapper[4988]: I1123 06:57:59.809510 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-run-systemd\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809554 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-run-ovn\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809599 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-log-socket\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809631 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-run-ovn-kubernetes\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809665 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-cni-netd\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809692 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-host-kubelet\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809721 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-run-openvswitch\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809749 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-systemd-units\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809778 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-etc-openvswitch\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809810 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-var-lib-openvswitch\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809838 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-node-log\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.809911 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-ovnkube-config\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.810334 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-env-overrides\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.814765 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-ovn-node-metrics-cert\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.837440 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jhsm\" (UniqueName: \"kubernetes.io/projected/a89b0650-17f4-49fc-8cb0-c47e9b2387e6-kube-api-access-4jhsm\") pod \"ovnkube-node-qlfbb\" (UID: \"a89b0650-17f4-49fc-8cb0-c47e9b2387e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: I1123 06:57:59.925670 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:57:59 crc kubenswrapper[4988]: W1123 06:57:59.948995 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda89b0650_17f4_49fc_8cb0_c47e9b2387e6.slice/crio-110e8458ca295a7d8713268c6324b4b3374fb3e350642de47fe17b3e2d0943dc WatchSource:0}: Error finding container 110e8458ca295a7d8713268c6324b4b3374fb3e350642de47fe17b3e2d0943dc: Status 404 returned error can't find the container with id 110e8458ca295a7d8713268c6324b4b3374fb3e350642de47fe17b3e2d0943dc Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.511458 4988 generic.go:334] "Generic (PLEG): container finished" podID="a89b0650-17f4-49fc-8cb0-c47e9b2387e6" containerID="22f2edb4065c53895da7161937eedb2f0d495d0733c216916799266046235471" exitCode=0 Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.511562 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerDied","Data":"22f2edb4065c53895da7161937eedb2f0d495d0733c216916799266046235471"} Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.511808 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerStarted","Data":"110e8458ca295a7d8713268c6324b4b3374fb3e350642de47fe17b3e2d0943dc"} Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.515607 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4p82c_0dde7218-bd4b-4585-b049-cb8db163fdac/kube-multus/1.log" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.515707 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4p82c" event={"ID":"0dde7218-bd4b-4585-b049-cb8db163fdac","Type":"ContainerStarted","Data":"d6a1723baeba19ba8009e0ddd1975697fb1449f407e90b7f106500b0d7bae488"} Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.554965 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovn-acl-logging/0.log" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.556679 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxqnz_cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/ovn-controller/0.log" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.557720 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" event={"ID":"cb5bfadf-3097-45a0-a0d8-2b75e4c1e931","Type":"ContainerDied","Data":"1c2f447fceae5d2e56e4f2800c4665e6888b7f2b9fd30d462b5b18dcf20c847d"} Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.557801 4988 scope.go:117] "RemoveContainer" containerID="a2a5dbc04610d0a4a8e6e80a2ce783434d59d7f84da7b04c4a1c7fba5e900935" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.557943 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxqnz" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.608860 4988 scope.go:117] "RemoveContainer" containerID="3fce6142f392d680d6a9f7e7e93afbc2c2df8677db2d3a8a935e6069f7dbf4f3" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.657553 4988 scope.go:117] "RemoveContainer" containerID="19320733ea5f186755f942fb73c65a0c9922d207b2128622c04aedc5eecc7472" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.664743 4988 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.670932 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxqnz"] Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.679139 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxqnz"] Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.683711 4988 scope.go:117] "RemoveContainer" containerID="fef950a279d0ecc19be20102fb886f5854801eaff0f0fd9864d11129442fe63e" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.712546 4988 scope.go:117] "RemoveContainer" containerID="b914f352b5b415be3e6f74b87459f35bf877ea0d8c9d425761182a0a5c23f892" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.743613 4988 scope.go:117] "RemoveContainer" containerID="0c1c972f84e6949138541037b5cad0e831f19427d09499deeac5e260510e8a48" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.765200 4988 scope.go:117] "RemoveContainer" containerID="544685c02e74e7e48aaa010e4a400efe060f1fa987996aaabec1ab7e28dc7a31" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.796163 4988 scope.go:117] "RemoveContainer" containerID="9f12d6f822cae082980385fe687158c3bb7d0514d789c269b197e8760313e842" Nov 23 06:58:00 crc kubenswrapper[4988]: I1123 06:58:00.822318 4988 scope.go:117] "RemoveContainer" containerID="0cec90e35e3d0a49bc232c69099c31efde1ab2b03149fb997b39622cc711a4de" Nov 23 06:58:01 crc kubenswrapper[4988]: I1123 06:58:01.574030 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerStarted","Data":"585001ed2e4a219498d3b3ebdeb6af7b85a079a0c6a84f8414de64ba96904fde"} Nov 23 06:58:01 crc kubenswrapper[4988]: I1123 06:58:01.574491 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerStarted","Data":"a260c7dccff6357b0b5f6a6fdd37c118935a6bbcdf53b6ee6f46a70a26388e19"} Nov 23 06:58:01 crc kubenswrapper[4988]: I1123 06:58:01.574508 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerStarted","Data":"af0a61a9aab14c4baa11a630ad7a8fd2211e4e009b96cebebb73af54fb39ed82"} Nov 23 06:58:01 crc kubenswrapper[4988]: I1123 06:58:01.574522 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerStarted","Data":"a65f7af8919c5c19c4dd15cd6a0dc88fdbd73068381a65a80f6a11dce514de1e"} Nov 23 06:58:01 crc kubenswrapper[4988]: I1123 06:58:01.574536 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" 
event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerStarted","Data":"2af0ed636c0f32be106e62889e8374b6244f48857fdee78af86e12cb1efc147b"} Nov 23 06:58:01 crc kubenswrapper[4988]: I1123 06:58:01.574551 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerStarted","Data":"005c3a4a434c8ec53dd79681d44eec00e0146fba416241603d0e51ac44cef1f6"} Nov 23 06:58:02 crc kubenswrapper[4988]: I1123 06:58:02.505186 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb5bfadf-3097-45a0-a0d8-2b75e4c1e931" path="/var/lib/kubelet/pods/cb5bfadf-3097-45a0-a0d8-2b75e4c1e931/volumes" Nov 23 06:58:04 crc kubenswrapper[4988]: I1123 06:58:04.599402 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerStarted","Data":"00dfa8da1d657f92cb1e14c00a2050193afe77f462129816c1f6a3307cf89f45"} Nov 23 06:58:07 crc kubenswrapper[4988]: I1123 06:58:07.621912 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" event={"ID":"a89b0650-17f4-49fc-8cb0-c47e9b2387e6","Type":"ContainerStarted","Data":"7f1386452a22cb07d1f5e432ae39219a1508b7bf4b078e89fdd9e28b22e3b6e4"} Nov 23 06:58:07 crc kubenswrapper[4988]: I1123 06:58:07.622724 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:58:07 crc kubenswrapper[4988]: I1123 06:58:07.622812 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:58:07 crc kubenswrapper[4988]: I1123 06:58:07.622878 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:58:07 crc kubenswrapper[4988]: I1123 06:58:07.656368 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:58:07 crc kubenswrapper[4988]: I1123 06:58:07.658102 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:58:07 crc kubenswrapper[4988]: I1123 06:58:07.665543 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" podStartSLOduration=8.665531517 podStartE2EDuration="8.665531517s" podCreationTimestamp="2025-11-23 06:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:07.662943485 +0000 UTC m=+739.971456258" watchObservedRunningTime="2025-11-23 06:58:07.665531517 +0000 UTC m=+739.974044280" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.002376 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-fl6wx"] Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.003158 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.005519 4988 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-6s8d2" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.005593 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.005480 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.006330 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.016375 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-fl6wx"] Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.152813 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f82c43d-4d02-41dd-b05b-51735da4e160-node-mnt\") pod \"crc-storage-crc-fl6wx\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.153080 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f82c43d-4d02-41dd-b05b-51735da4e160-crc-storage\") pod \"crc-storage-crc-fl6wx\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.153245 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfzt\" (UniqueName: \"kubernetes.io/projected/7f82c43d-4d02-41dd-b05b-51735da4e160-kube-api-access-qdfzt\") pod \"crc-storage-crc-fl6wx\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.254496 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f82c43d-4d02-41dd-b05b-51735da4e160-crc-storage\") pod \"crc-storage-crc-fl6wx\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.254558 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdfzt\" (UniqueName: \"kubernetes.io/projected/7f82c43d-4d02-41dd-b05b-51735da4e160-kube-api-access-qdfzt\") pod \"crc-storage-crc-fl6wx\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.254618 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f82c43d-4d02-41dd-b05b-51735da4e160-node-mnt\") pod \"crc-storage-crc-fl6wx\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.254968 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f82c43d-4d02-41dd-b05b-51735da4e160-node-mnt\") pod \"crc-storage-crc-fl6wx\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " 
pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.256480 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f82c43d-4d02-41dd-b05b-51735da4e160-crc-storage\") pod \"crc-storage-crc-fl6wx\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.280918 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdfzt\" (UniqueName: \"kubernetes.io/projected/7f82c43d-4d02-41dd-b05b-51735da4e160-kube-api-access-qdfzt\") pod \"crc-storage-crc-fl6wx\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.325383 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: E1123 06:58:08.357021 4988 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fl6wx_crc-storage_7f82c43d-4d02-41dd-b05b-51735da4e160_0(063f134c449525ebf281ef4faaff2f075cd4d6b68138f8c7dd582eec85082392): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 06:58:08 crc kubenswrapper[4988]: E1123 06:58:08.357121 4988 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fl6wx_crc-storage_7f82c43d-4d02-41dd-b05b-51735da4e160_0(063f134c449525ebf281ef4faaff2f075cd4d6b68138f8c7dd582eec85082392): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: E1123 06:58:08.357157 4988 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fl6wx_crc-storage_7f82c43d-4d02-41dd-b05b-51735da4e160_0(063f134c449525ebf281ef4faaff2f075cd4d6b68138f8c7dd582eec85082392): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: E1123 06:58:08.357252 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-fl6wx_crc-storage(7f82c43d-4d02-41dd-b05b-51735da4e160)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-fl6wx_crc-storage(7f82c43d-4d02-41dd-b05b-51735da4e160)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fl6wx_crc-storage_7f82c43d-4d02-41dd-b05b-51735da4e160_0(063f134c449525ebf281ef4faaff2f075cd4d6b68138f8c7dd582eec85082392): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-fl6wx" podUID="7f82c43d-4d02-41dd-b05b-51735da4e160" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.629508 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: I1123 06:58:08.630869 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: E1123 06:58:08.669169 4988 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fl6wx_crc-storage_7f82c43d-4d02-41dd-b05b-51735da4e160_0(edabf870e66e82fac45b5719c88a4d861d38079c96d80e936665d0166490e415): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 06:58:08 crc kubenswrapper[4988]: E1123 06:58:08.669252 4988 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fl6wx_crc-storage_7f82c43d-4d02-41dd-b05b-51735da4e160_0(edabf870e66e82fac45b5719c88a4d861d38079c96d80e936665d0166490e415): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: E1123 06:58:08.669271 4988 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fl6wx_crc-storage_7f82c43d-4d02-41dd-b05b-51735da4e160_0(edabf870e66e82fac45b5719c88a4d861d38079c96d80e936665d0166490e415): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:08 crc kubenswrapper[4988]: E1123 06:58:08.669328 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-fl6wx_crc-storage(7f82c43d-4d02-41dd-b05b-51735da4e160)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-fl6wx_crc-storage(7f82c43d-4d02-41dd-b05b-51735da4e160)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-fl6wx_crc-storage_7f82c43d-4d02-41dd-b05b-51735da4e160_0(edabf870e66e82fac45b5719c88a4d861d38079c96d80e936665d0166490e415): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-fl6wx" podUID="7f82c43d-4d02-41dd-b05b-51735da4e160" Nov 23 06:58:19 crc kubenswrapper[4988]: I1123 06:58:19.496285 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:19 crc kubenswrapper[4988]: I1123 06:58:19.498116 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:19 crc kubenswrapper[4988]: I1123 06:58:19.938708 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-fl6wx"] Nov 23 06:58:19 crc kubenswrapper[4988]: I1123 06:58:19.966277 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 06:58:20 crc kubenswrapper[4988]: I1123 06:58:20.714120 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fl6wx" event={"ID":"7f82c43d-4d02-41dd-b05b-51735da4e160","Type":"ContainerStarted","Data":"8ddfc5aa02015a0a1802ab10c753148097cd9748c3895ec2d3711e94a10f8cba"} Nov 23 06:58:21 crc kubenswrapper[4988]: I1123 06:58:21.672378 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:58:21 crc kubenswrapper[4988]: I1123 06:58:21.672481 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:58:22 crc kubenswrapper[4988]: I1123 06:58:22.731469 4988 generic.go:334] "Generic (PLEG): container finished" podID="7f82c43d-4d02-41dd-b05b-51735da4e160" containerID="62484c6fbe8127266f8c378eb9ca0d9110f139143cb24b1ec18fce16679ad959" exitCode=0 Nov 23 06:58:22 crc kubenswrapper[4988]: I1123 06:58:22.731552 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fl6wx" event={"ID":"7f82c43d-4d02-41dd-b05b-51735da4e160","Type":"ContainerDied","Data":"62484c6fbe8127266f8c378eb9ca0d9110f139143cb24b1ec18fce16679ad959"} Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.146238 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.196854 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f82c43d-4d02-41dd-b05b-51735da4e160-node-mnt\") pod \"7f82c43d-4d02-41dd-b05b-51735da4e160\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.196954 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdfzt\" (UniqueName: \"kubernetes.io/projected/7f82c43d-4d02-41dd-b05b-51735da4e160-kube-api-access-qdfzt\") pod \"7f82c43d-4d02-41dd-b05b-51735da4e160\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.197015 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f82c43d-4d02-41dd-b05b-51735da4e160-crc-storage\") pod \"7f82c43d-4d02-41dd-b05b-51735da4e160\" (UID: \"7f82c43d-4d02-41dd-b05b-51735da4e160\") " Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.197052 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f82c43d-4d02-41dd-b05b-51735da4e160-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "7f82c43d-4d02-41dd-b05b-51735da4e160" (UID: "7f82c43d-4d02-41dd-b05b-51735da4e160"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.197332 4988 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/7f82c43d-4d02-41dd-b05b-51735da4e160-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.204524 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f82c43d-4d02-41dd-b05b-51735da4e160-kube-api-access-qdfzt" (OuterVolumeSpecName: "kube-api-access-qdfzt") pod "7f82c43d-4d02-41dd-b05b-51735da4e160" (UID: "7f82c43d-4d02-41dd-b05b-51735da4e160"). InnerVolumeSpecName "kube-api-access-qdfzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.210820 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f82c43d-4d02-41dd-b05b-51735da4e160-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "7f82c43d-4d02-41dd-b05b-51735da4e160" (UID: "7f82c43d-4d02-41dd-b05b-51735da4e160"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.298585 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdfzt\" (UniqueName: \"kubernetes.io/projected/7f82c43d-4d02-41dd-b05b-51735da4e160-kube-api-access-qdfzt\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.298635 4988 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/7f82c43d-4d02-41dd-b05b-51735da4e160-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.748912 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-fl6wx" event={"ID":"7f82c43d-4d02-41dd-b05b-51735da4e160","Type":"ContainerDied","Data":"8ddfc5aa02015a0a1802ab10c753148097cd9748c3895ec2d3711e94a10f8cba"} Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.749013 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ddfc5aa02015a0a1802ab10c753148097cd9748c3895ec2d3711e94a10f8cba" Nov 23 06:58:24 crc kubenswrapper[4988]: I1123 06:58:24.748968 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-fl6wx" Nov 23 06:58:29 crc kubenswrapper[4988]: I1123 06:58:29.966637 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qlfbb" Nov 23 06:58:32 crc kubenswrapper[4988]: I1123 06:58:32.813478 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg"] Nov 23 06:58:32 crc kubenswrapper[4988]: E1123 06:58:32.813732 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f82c43d-4d02-41dd-b05b-51735da4e160" containerName="storage" Nov 23 06:58:32 crc kubenswrapper[4988]: I1123 06:58:32.813749 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f82c43d-4d02-41dd-b05b-51735da4e160" containerName="storage" Nov 23 06:58:32 crc kubenswrapper[4988]: I1123 06:58:32.813849 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f82c43d-4d02-41dd-b05b-51735da4e160" containerName="storage" Nov 23 06:58:32 crc kubenswrapper[4988]: I1123 06:58:32.814698 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:32 crc kubenswrapper[4988]: I1123 06:58:32.816979 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 06:58:32 crc kubenswrapper[4988]: I1123 06:58:32.826798 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg"] Nov 23 06:58:32 crc kubenswrapper[4988]: I1123 06:58:32.922028 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxfs\" (UniqueName: \"kubernetes.io/projected/fdd62a47-453f-42a2-a73a-5dba3633b5be-kube-api-access-jgxfs\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:32 crc kubenswrapper[4988]: I1123 06:58:32.922327 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:32 crc kubenswrapper[4988]: I1123 06:58:32.922434 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.023875 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgxfs\" (UniqueName: \"kubernetes.io/projected/fdd62a47-453f-42a2-a73a-5dba3633b5be-kube-api-access-jgxfs\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.024264 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.024595 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.025038 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.025029 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.057721 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgxfs\" (UniqueName: \"kubernetes.io/projected/fdd62a47-453f-42a2-a73a-5dba3633b5be-kube-api-access-jgxfs\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.133172 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.351355 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg"] Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.807389 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" event={"ID":"fdd62a47-453f-42a2-a73a-5dba3633b5be","Type":"ContainerStarted","Data":"faa1783fbc541a172206fe748c9e6b74ff675cb187b6996413077f99d7d2e1c5"} Nov 23 06:58:33 crc kubenswrapper[4988]: I1123 06:58:33.808153 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" event={"ID":"fdd62a47-453f-42a2-a73a-5dba3633b5be","Type":"ContainerStarted","Data":"52a209db283e997e228bc3d16c4eb09433e938298b2d622b04fea74162e16a57"} Nov 23 06:58:34 crc kubenswrapper[4988]: I1123 06:58:34.818008 4988 generic.go:334] "Generic (PLEG): container finished" podID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerID="faa1783fbc541a172206fe748c9e6b74ff675cb187b6996413077f99d7d2e1c5" exitCode=0 Nov 23 06:58:34 crc kubenswrapper[4988]: I1123 06:58:34.818073 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" event={"ID":"fdd62a47-453f-42a2-a73a-5dba3633b5be","Type":"ContainerDied","Data":"faa1783fbc541a172206fe748c9e6b74ff675cb187b6996413077f99d7d2e1c5"} Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.173651 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jfwxm"] Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.175393 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.186212 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfwxm"] Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.256682 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-catalog-content\") pod \"redhat-operators-jfwxm\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.256758 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9hrl\" (UniqueName: \"kubernetes.io/projected/1d47121c-9d62-4fd5-bf28-a30c9f716116-kube-api-access-x9hrl\") pod \"redhat-operators-jfwxm\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.256889 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-utilities\") pod \"redhat-operators-jfwxm\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.358146 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-catalog-content\") pod \"redhat-operators-jfwxm\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.358244 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9hrl\" (UniqueName: \"kubernetes.io/projected/1d47121c-9d62-4fd5-bf28-a30c9f716116-kube-api-access-x9hrl\") pod \"redhat-operators-jfwxm\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.358290 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-utilities\") pod \"redhat-operators-jfwxm\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.359053 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-utilities\") pod \"redhat-operators-jfwxm\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.359046 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-catalog-content\") pod \"redhat-operators-jfwxm\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.378855 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x9hrl\" (UniqueName: \"kubernetes.io/projected/1d47121c-9d62-4fd5-bf28-a30c9f716116-kube-api-access-x9hrl\") pod \"redhat-operators-jfwxm\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.527840 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.764635 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfwxm"] Nov 23 06:58:35 crc kubenswrapper[4988]: W1123 06:58:35.772882 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d47121c_9d62_4fd5_bf28_a30c9f716116.slice/crio-eada038a5bd068b38fdb2d46809f6ad9eaf7402afa81fe51b1ec79ea99625a0d WatchSource:0}: Error finding container eada038a5bd068b38fdb2d46809f6ad9eaf7402afa81fe51b1ec79ea99625a0d: Status 404 returned error can't find the container with id eada038a5bd068b38fdb2d46809f6ad9eaf7402afa81fe51b1ec79ea99625a0d Nov 23 06:58:35 crc kubenswrapper[4988]: I1123 06:58:35.829489 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfwxm" event={"ID":"1d47121c-9d62-4fd5-bf28-a30c9f716116","Type":"ContainerStarted","Data":"eada038a5bd068b38fdb2d46809f6ad9eaf7402afa81fe51b1ec79ea99625a0d"} Nov 23 06:58:36 crc kubenswrapper[4988]: I1123 06:58:36.838905 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerID="ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99" exitCode=0 Nov 23 06:58:36 crc kubenswrapper[4988]: I1123 06:58:36.839562 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfwxm" event={"ID":"1d47121c-9d62-4fd5-bf28-a30c9f716116","Type":"ContainerDied","Data":"ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99"} Nov 23 06:58:36 crc kubenswrapper[4988]: I1123 06:58:36.845511 4988 generic.go:334] "Generic (PLEG): container finished" podID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerID="4048fbcfa022a697efdebb5fa4715717068694a7bf653adfc73b6fea87b4dfbd" exitCode=0 Nov 23 06:58:36 crc kubenswrapper[4988]: I1123 06:58:36.845557 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" event={"ID":"fdd62a47-453f-42a2-a73a-5dba3633b5be","Type":"ContainerDied","Data":"4048fbcfa022a697efdebb5fa4715717068694a7bf653adfc73b6fea87b4dfbd"} Nov 23 06:58:37 crc kubenswrapper[4988]: I1123 06:58:37.858042 4988 generic.go:334] "Generic (PLEG): container finished" podID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerID="8fc11fd221320fb326ea5168fa5137394a64b0bd54751260cd022abf7bcb4d76" exitCode=0 Nov 23 06:58:37 crc kubenswrapper[4988]: I1123 06:58:37.858143 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" event={"ID":"fdd62a47-453f-42a2-a73a-5dba3633b5be","Type":"ContainerDied","Data":"8fc11fd221320fb326ea5168fa5137394a64b0bd54751260cd022abf7bcb4d76"} Nov 23 06:58:38 crc kubenswrapper[4988]: I1123 06:58:38.868507 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfwxm" 
event={"ID":"1d47121c-9d62-4fd5-bf28-a30c9f716116","Type":"ContainerStarted","Data":"f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721"} Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.152021 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.203879 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-util\") pod \"fdd62a47-453f-42a2-a73a-5dba3633b5be\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.203965 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-bundle\") pod \"fdd62a47-453f-42a2-a73a-5dba3633b5be\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.204061 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgxfs\" (UniqueName: \"kubernetes.io/projected/fdd62a47-453f-42a2-a73a-5dba3633b5be-kube-api-access-jgxfs\") pod \"fdd62a47-453f-42a2-a73a-5dba3633b5be\" (UID: \"fdd62a47-453f-42a2-a73a-5dba3633b5be\") " Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.204584 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-bundle" (OuterVolumeSpecName: "bundle") pod "fdd62a47-453f-42a2-a73a-5dba3633b5be" (UID: "fdd62a47-453f-42a2-a73a-5dba3633b5be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.209844 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd62a47-453f-42a2-a73a-5dba3633b5be-kube-api-access-jgxfs" (OuterVolumeSpecName: "kube-api-access-jgxfs") pod "fdd62a47-453f-42a2-a73a-5dba3633b5be" (UID: "fdd62a47-453f-42a2-a73a-5dba3633b5be"). InnerVolumeSpecName "kube-api-access-jgxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.305371 4988 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.305416 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgxfs\" (UniqueName: \"kubernetes.io/projected/fdd62a47-453f-42a2-a73a-5dba3633b5be-kube-api-access-jgxfs\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.879494 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" event={"ID":"fdd62a47-453f-42a2-a73a-5dba3633b5be","Type":"ContainerDied","Data":"52a209db283e997e228bc3d16c4eb09433e938298b2d622b04fea74162e16a57"} Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.879586 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a209db283e997e228bc3d16c4eb09433e938298b2d622b04fea74162e16a57" Nov 23 06:58:39 crc kubenswrapper[4988]: I1123 06:58:39.879516 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg" Nov 23 06:58:40 crc kubenswrapper[4988]: I1123 06:58:40.017473 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-util" (OuterVolumeSpecName: "util") pod "fdd62a47-453f-42a2-a73a-5dba3633b5be" (UID: "fdd62a47-453f-42a2-a73a-5dba3633b5be"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:40 crc kubenswrapper[4988]: I1123 06:58:40.116623 4988 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdd62a47-453f-42a2-a73a-5dba3633b5be-util\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:40 crc kubenswrapper[4988]: I1123 06:58:40.888488 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerID="f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721" exitCode=0 Nov 23 06:58:40 crc kubenswrapper[4988]: I1123 06:58:40.888580 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfwxm" event={"ID":"1d47121c-9d62-4fd5-bf28-a30c9f716116","Type":"ContainerDied","Data":"f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721"} Nov 23 06:58:42 crc kubenswrapper[4988]: I1123 06:58:42.905870 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfwxm" event={"ID":"1d47121c-9d62-4fd5-bf28-a30c9f716116","Type":"ContainerStarted","Data":"ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8"} Nov 23 06:58:42 crc kubenswrapper[4988]: I1123 06:58:42.925337 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jfwxm" podStartSLOduration=2.880774395 podStartE2EDuration="7.925313914s" podCreationTimestamp="2025-11-23 06:58:35 +0000 UTC" firstStartedPulling="2025-11-23 06:58:36.841367747 +0000 UTC m=+769.149880520" lastFinishedPulling="2025-11-23 06:58:41.885907246 +0000 UTC m=+774.194420039" observedRunningTime="2025-11-23 06:58:42.922578908 +0000 UTC m=+775.231091691" watchObservedRunningTime="2025-11-23 06:58:42.925313914 +0000 UTC m=+775.233826687" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.067331 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nwg45"] Nov 23 06:58:43 crc kubenswrapper[4988]: E1123 06:58:43.067583 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerName="pull" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.067597 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerName="pull" Nov 23 06:58:43 crc kubenswrapper[4988]: E1123 06:58:43.067610 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerName="extract" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.067617 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerName="extract" Nov 23 06:58:43 crc kubenswrapper[4988]: E1123 06:58:43.067630 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerName="util" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.067639 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerName="util" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.067744 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd62a47-453f-42a2-a73a-5dba3633b5be" containerName="extract" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.068209 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwg45" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.071707 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.072456 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tvzjw" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.074409 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.094689 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nwg45"] Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.156423 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vbh4\" (UniqueName: \"kubernetes.io/projected/c385a6b1-1890-43aa-9929-3a4a4fcb399c-kube-api-access-9vbh4\") pod \"nmstate-operator-557fdffb88-nwg45\" (UID: \"c385a6b1-1890-43aa-9929-3a4a4fcb399c\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nwg45" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.257299 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vbh4\" (UniqueName: \"kubernetes.io/projected/c385a6b1-1890-43aa-9929-3a4a4fcb399c-kube-api-access-9vbh4\") pod \"nmstate-operator-557fdffb88-nwg45\" (UID: \"c385a6b1-1890-43aa-9929-3a4a4fcb399c\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nwg45" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.280903 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vbh4\" (UniqueName: \"kubernetes.io/projected/c385a6b1-1890-43aa-9929-3a4a4fcb399c-kube-api-access-9vbh4\") pod \"nmstate-operator-557fdffb88-nwg45\" (UID: \"c385a6b1-1890-43aa-9929-3a4a4fcb399c\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-nwg45" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.385502 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwg45" Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.865860 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-nwg45"] Nov 23 06:58:43 crc kubenswrapper[4988]: I1123 06:58:43.911563 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwg45" event={"ID":"c385a6b1-1890-43aa-9929-3a4a4fcb399c","Type":"ContainerStarted","Data":"cae60abd022d52f2f26130415d299034a691cec44dbccbe5596a50a262f68a54"} Nov 23 06:58:45 crc kubenswrapper[4988]: I1123 06:58:45.528874 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:45 crc kubenswrapper[4988]: I1123 06:58:45.529380 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:46 crc kubenswrapper[4988]: I1123 06:58:46.581621 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jfwxm" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerName="registry-server" probeResult="failure" output=< Nov 23 06:58:46 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 06:58:46 crc kubenswrapper[4988]: > Nov 23 06:58:48 crc kubenswrapper[4988]: I1123 06:58:48.948160 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwg45" event={"ID":"c385a6b1-1890-43aa-9929-3a4a4fcb399c","Type":"ContainerStarted","Data":"470dee5ccc3d53d86500f52576d961941f66fc944e41533ede3c819b199cbaa6"} Nov 23 06:58:48 crc kubenswrapper[4988]: I1123 06:58:48.971295 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-nwg45" podStartSLOduration=1.5246315620000002 podStartE2EDuration="5.97126335s" podCreationTimestamp="2025-11-23 06:58:43 +0000 UTC" firstStartedPulling="2025-11-23 06:58:43.873761937 +0000 UTC m=+776.182274700" lastFinishedPulling="2025-11-23 06:58:48.320393715 +0000 UTC m=+780.628906488" observedRunningTime="2025-11-23 06:58:48.966248898 +0000 UTC m=+781.274761691" watchObservedRunningTime="2025-11-23 06:58:48.97126335 +0000 UTC m=+781.279776123" Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.672560 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.672665 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.672743 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.673724 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ae910c0fb450e025f230121941feb80702360a059115278b81abe53533b952bb"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.673846 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://ae910c0fb450e025f230121941feb80702360a059115278b81abe53533b952bb" gracePeriod=600 Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.870536 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nhpxf"] Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.872183 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.885264 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhpxf"] Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.966098 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="ae910c0fb450e025f230121941feb80702360a059115278b81abe53533b952bb" exitCode=0 Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.966158 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"ae910c0fb450e025f230121941feb80702360a059115278b81abe53533b952bb"} Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.966226 4988 scope.go:117] "RemoveContainer" containerID="b759bb52933ee97cfc0f6a0286089efb90b906890824c971add7c9ea19f50f2b" Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.977619 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-utilities\") pod \"redhat-marketplace-nhpxf\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.977700 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2pkk\" (UniqueName: \"kubernetes.io/projected/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-kube-api-access-h2pkk\") pod \"redhat-marketplace-nhpxf\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:51 crc kubenswrapper[4988]: I1123 06:58:51.977744 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-catalog-content\") pod \"redhat-marketplace-nhpxf\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.078856 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-catalog-content\") pod \"redhat-marketplace-nhpxf\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " 
pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.078981 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-utilities\") pod \"redhat-marketplace-nhpxf\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.079060 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2pkk\" (UniqueName: \"kubernetes.io/projected/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-kube-api-access-h2pkk\") pod \"redhat-marketplace-nhpxf\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.079457 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-catalog-content\") pod \"redhat-marketplace-nhpxf\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.079517 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-utilities\") pod \"redhat-marketplace-nhpxf\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.105962 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2pkk\" (UniqueName: \"kubernetes.io/projected/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-kube-api-access-h2pkk\") pod \"redhat-marketplace-nhpxf\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.215141 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.451334 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhpxf"] Nov 23 06:58:52 crc kubenswrapper[4988]: W1123 06:58:52.462145 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e4c74bc_c32c_4114_b2e7_14fe47b0c0a4.slice/crio-833bae0da33f9db2aaf111229299b7ce36e3eb25586f0a199402b78dc75463a2 WatchSource:0}: Error finding container 833bae0da33f9db2aaf111229299b7ce36e3eb25586f0a199402b78dc75463a2: Status 404 returned error can't find the container with id 833bae0da33f9db2aaf111229299b7ce36e3eb25586f0a199402b78dc75463a2 Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.973554 4988 generic.go:334] "Generic (PLEG): container finished" podID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerID="4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4" exitCode=0 Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.973607 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhpxf" event={"ID":"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4","Type":"ContainerDied","Data":"4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4"} Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.973864 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhpxf" event={"ID":"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4","Type":"ContainerStarted","Data":"833bae0da33f9db2aaf111229299b7ce36e3eb25586f0a199402b78dc75463a2"} Nov 23 06:58:52 crc kubenswrapper[4988]: I1123 06:58:52.977255 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"34c85a0a4c08a90d6702cc03e588a3106590e0c8cfb5f38fb5c1e6482b5b2faf"} Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.059382 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.060776 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.063005 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-zhrtz" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.064392 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.065243 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.066988 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.077727 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.089056 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2vt9j"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.090067 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.127896 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.191803 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/026e45ed-f994-494d-ae7a-32e215f95cd2-ovs-socket\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.191876 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqbkt\" (UniqueName: \"kubernetes.io/projected/026e45ed-f994-494d-ae7a-32e215f95cd2-kube-api-access-nqbkt\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.191907 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t27p\" (UniqueName: \"kubernetes.io/projected/69fa7667-f139-4534-b39c-ac6c41f078c9-kube-api-access-7t27p\") pod \"nmstate-webhook-6b89b748d8-ldrdz\" (UID: \"69fa7667-f139-4534-b39c-ac6c41f078c9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.191944 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/026e45ed-f994-494d-ae7a-32e215f95cd2-dbus-socket\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.191980 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pshcr\" (UniqueName: \"kubernetes.io/projected/fe77a552-6fcd-45d9-9d61-e242fdb0c4a2-kube-api-access-pshcr\") pod \"nmstate-metrics-5dcf9c57c5-vwth7\" (UID: \"fe77a552-6fcd-45d9-9d61-e242fdb0c4a2\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.192010 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/69fa7667-f139-4534-b39c-ac6c41f078c9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-ldrdz\" (UID: \"69fa7667-f139-4534-b39c-ac6c41f078c9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.192031 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/026e45ed-f994-494d-ae7a-32e215f95cd2-nmstate-lock\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.200378 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.200977 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.202808 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.203014 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-dwrwm" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.209742 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.211666 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293551 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/026e45ed-f994-494d-ae7a-32e215f95cd2-ovs-socket\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293601 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/deca23f5-716a-4d4e-88ae-dd9315c77268-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-wlvj5\" (UID: \"deca23f5-716a-4d4e-88ae-dd9315c77268\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293624 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqbkt\" (UniqueName: \"kubernetes.io/projected/026e45ed-f994-494d-ae7a-32e215f95cd2-kube-api-access-nqbkt\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293642 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t27p\" (UniqueName: \"kubernetes.io/projected/69fa7667-f139-4534-b39c-ac6c41f078c9-kube-api-access-7t27p\") pod \"nmstate-webhook-6b89b748d8-ldrdz\" (UID: \"69fa7667-f139-4534-b39c-ac6c41f078c9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293665 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnbw4\" (UniqueName: \"kubernetes.io/projected/deca23f5-716a-4d4e-88ae-dd9315c77268-kube-api-access-fnbw4\") pod \"nmstate-console-plugin-5874bd7bc5-wlvj5\" (UID: \"deca23f5-716a-4d4e-88ae-dd9315c77268\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293684 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/026e45ed-f994-494d-ae7a-32e215f95cd2-ovs-socket\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293687 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/026e45ed-f994-494d-ae7a-32e215f95cd2-dbus-socket\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293764 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/deca23f5-716a-4d4e-88ae-dd9315c77268-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-wlvj5\" (UID: \"deca23f5-716a-4d4e-88ae-dd9315c77268\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293792 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pshcr\" (UniqueName: \"kubernetes.io/projected/fe77a552-6fcd-45d9-9d61-e242fdb0c4a2-kube-api-access-pshcr\") pod \"nmstate-metrics-5dcf9c57c5-vwth7\" (UID: \"fe77a552-6fcd-45d9-9d61-e242fdb0c4a2\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293825 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/69fa7667-f139-4534-b39c-ac6c41f078c9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-ldrdz\" (UID: \"69fa7667-f139-4534-b39c-ac6c41f078c9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293854 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/026e45ed-f994-494d-ae7a-32e215f95cd2-nmstate-lock\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293927 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/026e45ed-f994-494d-ae7a-32e215f95cd2-nmstate-lock\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.293948 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/026e45ed-f994-494d-ae7a-32e215f95cd2-dbus-socket\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: E1123 06:58:53.294325 4988 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 23 06:58:53 crc kubenswrapper[4988]: E1123 06:58:53.294444 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69fa7667-f139-4534-b39c-ac6c41f078c9-tls-key-pair podName:69fa7667-f139-4534-b39c-ac6c41f078c9 nodeName:}" failed. 
No retries permitted until 2025-11-23 06:58:53.794415809 +0000 UTC m=+786.102928632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/69fa7667-f139-4534-b39c-ac6c41f078c9-tls-key-pair") pod "nmstate-webhook-6b89b748d8-ldrdz" (UID: "69fa7667-f139-4534-b39c-ac6c41f078c9") : secret "openshift-nmstate-webhook" not found Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.317973 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t27p\" (UniqueName: \"kubernetes.io/projected/69fa7667-f139-4534-b39c-ac6c41f078c9-kube-api-access-7t27p\") pod \"nmstate-webhook-6b89b748d8-ldrdz\" (UID: \"69fa7667-f139-4534-b39c-ac6c41f078c9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.324840 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqbkt\" (UniqueName: \"kubernetes.io/projected/026e45ed-f994-494d-ae7a-32e215f95cd2-kube-api-access-nqbkt\") pod \"nmstate-handler-2vt9j\" (UID: \"026e45ed-f994-494d-ae7a-32e215f95cd2\") " pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.324996 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pshcr\" (UniqueName: \"kubernetes.io/projected/fe77a552-6fcd-45d9-9d61-e242fdb0c4a2-kube-api-access-pshcr\") pod \"nmstate-metrics-5dcf9c57c5-vwth7\" (UID: \"fe77a552-6fcd-45d9-9d61-e242fdb0c4a2\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.377557 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.396090 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/deca23f5-716a-4d4e-88ae-dd9315c77268-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-wlvj5\" (UID: \"deca23f5-716a-4d4e-88ae-dd9315c77268\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.396170 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnbw4\" (UniqueName: \"kubernetes.io/projected/deca23f5-716a-4d4e-88ae-dd9315c77268-kube-api-access-fnbw4\") pod \"nmstate-console-plugin-5874bd7bc5-wlvj5\" (UID: \"deca23f5-716a-4d4e-88ae-dd9315c77268\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.396237 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/deca23f5-716a-4d4e-88ae-dd9315c77268-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-wlvj5\" (UID: \"deca23f5-716a-4d4e-88ae-dd9315c77268\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.396946 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/deca23f5-716a-4d4e-88ae-dd9315c77268-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-wlvj5\" (UID: \"deca23f5-716a-4d4e-88ae-dd9315c77268\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.408109 4988 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.408901 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/deca23f5-716a-4d4e-88ae-dd9315c77268-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-wlvj5\" (UID: \"deca23f5-716a-4d4e-88ae-dd9315c77268\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.425712 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d7dddfc8-58mn5"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.426626 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.428962 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnbw4\" (UniqueName: \"kubernetes.io/projected/deca23f5-716a-4d4e-88ae-dd9315c77268-kube-api-access-fnbw4\") pod \"nmstate-console-plugin-5874bd7bc5-wlvj5\" (UID: \"deca23f5-716a-4d4e-88ae-dd9315c77268\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.441075 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d7dddfc8-58mn5"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.524593 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.597963 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-oauth-serving-cert\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.598006 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1adfa2b4-b292-42e1-b86c-1a1b10af4766-console-oauth-config\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.598033 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-trusted-ca-bundle\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.598055 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-console-config\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.598079 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8vws\" (UniqueName: 
\"kubernetes.io/projected/1adfa2b4-b292-42e1-b86c-1a1b10af4766-kube-api-access-x8vws\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.598096 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-service-ca\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.598118 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1adfa2b4-b292-42e1-b86c-1a1b10af4766-console-serving-cert\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.699878 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-console-config\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.699934 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8vws\" (UniqueName: \"kubernetes.io/projected/1adfa2b4-b292-42e1-b86c-1a1b10af4766-kube-api-access-x8vws\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.699955 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-service-ca\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.699992 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1adfa2b4-b292-42e1-b86c-1a1b10af4766-console-serving-cert\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.700066 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-oauth-serving-cert\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.700088 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1adfa2b4-b292-42e1-b86c-1a1b10af4766-console-oauth-config\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.700107 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-trusted-ca-bundle\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.701374 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-trusted-ca-bundle\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.701834 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-console-config\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.702480 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-oauth-serving-cert\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.704045 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1adfa2b4-b292-42e1-b86c-1a1b10af4766-service-ca\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.707663 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1adfa2b4-b292-42e1-b86c-1a1b10af4766-console-serving-cert\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.707708 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1adfa2b4-b292-42e1-b86c-1a1b10af4766-console-oauth-config\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.718648 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8vws\" (UniqueName: \"kubernetes.io/projected/1adfa2b4-b292-42e1-b86c-1a1b10af4766-kube-api-access-x8vws\") pod \"console-64d7dddfc8-58mn5\" (UID: \"1adfa2b4-b292-42e1-b86c-1a1b10af4766\") " pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.758627 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.801383 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/69fa7667-f139-4534-b39c-ac6c41f078c9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-ldrdz\" (UID: \"69fa7667-f139-4534-b39c-ac6c41f078c9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.802660 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.807075 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/69fa7667-f139-4534-b39c-ac6c41f078c9-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-ldrdz\" (UID: \"69fa7667-f139-4534-b39c-ac6c41f078c9\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:53 crc kubenswrapper[4988]: W1123 06:58:53.820379 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe77a552_6fcd_45d9_9d61_e242fdb0c4a2.slice/crio-34c70a6cb059c03d336a9a7788988847afea069d9fca55da2d605eba03e6feb1 WatchSource:0}: Error finding container 34c70a6cb059c03d336a9a7788988847afea069d9fca55da2d605eba03e6feb1: Status 404 returned error can't find the container with id 34c70a6cb059c03d336a9a7788988847afea069d9fca55da2d605eba03e6feb1 Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.928156 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5"] Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.984398 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.984883 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7" event={"ID":"fe77a552-6fcd-45d9-9d61-e242fdb0c4a2","Type":"ContainerStarted","Data":"34c70a6cb059c03d336a9a7788988847afea069d9fca55da2d605eba03e6feb1"} Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.988846 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2vt9j" event={"ID":"026e45ed-f994-494d-ae7a-32e215f95cd2","Type":"ContainerStarted","Data":"1a7c24d710540a3353de7c70b8990a9c3acb9e4eee10c3bbdfe3bb281f05f24f"} Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.992292 4988 generic.go:334] "Generic (PLEG): container finished" podID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerID="be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2" exitCode=0 Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.992331 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhpxf" event={"ID":"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4","Type":"ContainerDied","Data":"be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2"} Nov 23 06:58:53 crc kubenswrapper[4988]: I1123 06:58:53.995159 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" event={"ID":"deca23f5-716a-4d4e-88ae-dd9315c77268","Type":"ContainerStarted","Data":"f35b5e8b257878c6f495275335fa366e69b80b1e5a7579cfd416c171bddd3e0d"} Nov 23 06:58:54 crc kubenswrapper[4988]: I1123 06:58:54.021260 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d7dddfc8-58mn5"] Nov 23 06:58:54 crc kubenswrapper[4988]: I1123 06:58:54.445747 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz"] Nov 23 06:58:54 crc kubenswrapper[4988]: W1123 06:58:54.453949 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69fa7667_f139_4534_b39c_ac6c41f078c9.slice/crio-9ef60308f927e81ad6a113ce477667fb9c4f04f45c604d0d68d9507e167d2f44 WatchSource:0}: Error finding container 9ef60308f927e81ad6a113ce477667fb9c4f04f45c604d0d68d9507e167d2f44: Status 404 returned error can't find the container with id 9ef60308f927e81ad6a113ce477667fb9c4f04f45c604d0d68d9507e167d2f44 Nov 23 06:58:55 crc kubenswrapper[4988]: I1123 06:58:55.003664 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" event={"ID":"69fa7667-f139-4534-b39c-ac6c41f078c9","Type":"ContainerStarted","Data":"9ef60308f927e81ad6a113ce477667fb9c4f04f45c604d0d68d9507e167d2f44"} Nov 23 06:58:55 crc kubenswrapper[4988]: I1123 06:58:55.005129 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d7dddfc8-58mn5" event={"ID":"1adfa2b4-b292-42e1-b86c-1a1b10af4766","Type":"ContainerStarted","Data":"05d4bebe6dfaef4c4188d231a1e5e2ca0d4281a5e89eebba6adfdb63231a62f3"} Nov 23 06:58:55 crc kubenswrapper[4988]: I1123 06:58:55.005150 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d7dddfc8-58mn5" event={"ID":"1adfa2b4-b292-42e1-b86c-1a1b10af4766","Type":"ContainerStarted","Data":"7fadb41b300cf3b00f9dfcf23c909d7af09e0fd9025944c704c0c3d21824bff7"} Nov 23 06:58:55 crc kubenswrapper[4988]: I1123 06:58:55.014810 4988 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhpxf" event={"ID":"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4","Type":"ContainerStarted","Data":"9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684"} Nov 23 06:58:55 crc kubenswrapper[4988]: I1123 06:58:55.030103 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d7dddfc8-58mn5" podStartSLOduration=2.030085094 podStartE2EDuration="2.030085094s" podCreationTimestamp="2025-11-23 06:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:58:55.02744346 +0000 UTC m=+787.335956223" watchObservedRunningTime="2025-11-23 06:58:55.030085094 +0000 UTC m=+787.338597857" Nov 23 06:58:55 crc kubenswrapper[4988]: I1123 06:58:55.046066 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nhpxf" podStartSLOduration=2.625316209 podStartE2EDuration="4.046044831s" podCreationTimestamp="2025-11-23 06:58:51 +0000 UTC" firstStartedPulling="2025-11-23 06:58:52.976349099 +0000 UTC m=+785.284861872" lastFinishedPulling="2025-11-23 06:58:54.397077721 +0000 UTC m=+786.705590494" observedRunningTime="2025-11-23 06:58:55.044627536 +0000 UTC m=+787.353140329" watchObservedRunningTime="2025-11-23 06:58:55.046044831 +0000 UTC m=+787.354557604" Nov 23 06:58:55 crc kubenswrapper[4988]: I1123 06:58:55.585351 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:55 crc kubenswrapper[4988]: I1123 06:58:55.626036 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:57 crc kubenswrapper[4988]: I1123 06:58:57.034891 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7" event={"ID":"fe77a552-6fcd-45d9-9d61-e242fdb0c4a2","Type":"ContainerStarted","Data":"66b4979fd6c239554d89319c5da4a55f9435538bc99e994d7e3623fce77ca893"} Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.051092 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" event={"ID":"69fa7667-f139-4534-b39c-ac6c41f078c9","Type":"ContainerStarted","Data":"5c2e24fd0bcf7eb78ad3a9d1add0a36cfc0559a94e65d21a798ec7b207d6bd24"} Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.051745 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.053561 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" event={"ID":"deca23f5-716a-4d4e-88ae-dd9315c77268","Type":"ContainerStarted","Data":"34851cedea4c24fd61379516473dabcab2b50f3fc38e4ef69e306a3bd999f577"} Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.055503 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jfwxm"] Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.056019 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jfwxm" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerName="registry-server" containerID="cri-o://ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8" gracePeriod=2 Nov 23 
06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.060138 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2vt9j" event={"ID":"026e45ed-f994-494d-ae7a-32e215f95cd2","Type":"ContainerStarted","Data":"732ed4d8a4dcd9e007b1fb66507e0bac440bc942a41f62add1cdc202eb105769"} Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.060874 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.087240 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" podStartSLOduration=2.740373718 podStartE2EDuration="5.087168666s" podCreationTimestamp="2025-11-23 06:58:53 +0000 UTC" firstStartedPulling="2025-11-23 06:58:54.457687808 +0000 UTC m=+786.766200571" lastFinishedPulling="2025-11-23 06:58:56.804482726 +0000 UTC m=+789.112995519" observedRunningTime="2025-11-23 06:58:58.084903421 +0000 UTC m=+790.393416234" watchObservedRunningTime="2025-11-23 06:58:58.087168666 +0000 UTC m=+790.395681469" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.125015 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-wlvj5" podStartSLOduration=2.269053509 podStartE2EDuration="5.124969461s" podCreationTimestamp="2025-11-23 06:58:53 +0000 UTC" firstStartedPulling="2025-11-23 06:58:53.944729571 +0000 UTC m=+786.253242334" lastFinishedPulling="2025-11-23 06:58:56.800645483 +0000 UTC m=+789.109158286" observedRunningTime="2025-11-23 06:58:58.114300753 +0000 UTC m=+790.422813596" watchObservedRunningTime="2025-11-23 06:58:58.124969461 +0000 UTC m=+790.433482234" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.147755 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2vt9j" podStartSLOduration=1.788820314 podStartE2EDuration="5.147735562s" podCreationTimestamp="2025-11-23 06:58:53 +0000 UTC" firstStartedPulling="2025-11-23 06:58:53.446150042 +0000 UTC m=+785.754662805" lastFinishedPulling="2025-11-23 06:58:56.80506525 +0000 UTC m=+789.113578053" observedRunningTime="2025-11-23 06:58:58.144172086 +0000 UTC m=+790.452684859" watchObservedRunningTime="2025-11-23 06:58:58.147735562 +0000 UTC m=+790.456248335" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.461814 4988 util.go:48] "No ready sandbox for pod can be found. 
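
The pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration is podStartE2EDuration minus the image-pull window, with all intervals taken from the kubelet's monotonic clock (the m=+... offsets). The check below redoes that arithmetic with the figures logged for redhat-marketplace-nhpxf; note that reading the subtraction as the tracker's definition is an inference from these numbers, not a quote from its source.

package main

import "fmt"

func main() {
	// Monotonic-clock offsets (the m=+... values) copied from the entries above.
	firstStartedPulling := 785.284861872 // 06:58:52.976349099
	lastFinishedPulling := 786.705590494 // 06:58:54.397077721
	podStartE2E := 4.046044831           // observedRunningTime minus podCreationTimestamp

	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:  %.9fs\n", pullWindow)             // 1.420728622s
	fmt.Printf("SLO duration: %.9fs\n", podStartE2E-pullWindow) // 2.625316209s, matching podStartSLOduration
}

The same identity holds for the nmstate pods at 06:58:58, e.g. 5.087168666 - (789.112995519 - 786.766200571) = 2.740373718 for the webhook.
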
Need to start a new one" pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.583826 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9hrl\" (UniqueName: \"kubernetes.io/projected/1d47121c-9d62-4fd5-bf28-a30c9f716116-kube-api-access-x9hrl\") pod \"1d47121c-9d62-4fd5-bf28-a30c9f716116\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.583992 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-utilities\") pod \"1d47121c-9d62-4fd5-bf28-a30c9f716116\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.584060 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-catalog-content\") pod \"1d47121c-9d62-4fd5-bf28-a30c9f716116\" (UID: \"1d47121c-9d62-4fd5-bf28-a30c9f716116\") " Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.585233 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-utilities" (OuterVolumeSpecName: "utilities") pod "1d47121c-9d62-4fd5-bf28-a30c9f716116" (UID: "1d47121c-9d62-4fd5-bf28-a30c9f716116"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.589497 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d47121c-9d62-4fd5-bf28-a30c9f716116-kube-api-access-x9hrl" (OuterVolumeSpecName: "kube-api-access-x9hrl") pod "1d47121c-9d62-4fd5-bf28-a30c9f716116" (UID: "1d47121c-9d62-4fd5-bf28-a30c9f716116"). InnerVolumeSpecName "kube-api-access-x9hrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.680567 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d47121c-9d62-4fd5-bf28-a30c9f716116" (UID: "1d47121c-9d62-4fd5-bf28-a30c9f716116"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.686199 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9hrl\" (UniqueName: \"kubernetes.io/projected/1d47121c-9d62-4fd5-bf28-a30c9f716116-kube-api-access-x9hrl\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.686228 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:58 crc kubenswrapper[4988]: I1123 06:58:58.686239 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d47121c-9d62-4fd5-bf28-a30c9f716116-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.067963 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerID="ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8" exitCode=0 Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.068030 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfwxm" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.068026 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfwxm" event={"ID":"1d47121c-9d62-4fd5-bf28-a30c9f716116","Type":"ContainerDied","Data":"ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8"} Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.069101 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfwxm" event={"ID":"1d47121c-9d62-4fd5-bf28-a30c9f716116","Type":"ContainerDied","Data":"eada038a5bd068b38fdb2d46809f6ad9eaf7402afa81fe51b1ec79ea99625a0d"} Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.069123 4988 scope.go:117] "RemoveContainer" containerID="ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.104538 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jfwxm"] Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.108811 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jfwxm"] Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.192957 4988 scope.go:117] "RemoveContainer" containerID="f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.215157 4988 scope.go:117] "RemoveContainer" containerID="ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.647622 4988 scope.go:117] "RemoveContainer" containerID="ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8" Nov 23 06:58:59 crc kubenswrapper[4988]: E1123 06:58:59.648322 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8\": container with ID starting with ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8 not found: ID does not exist" containerID="ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.648397 4988 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8"} err="failed to get container status \"ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8\": rpc error: code = NotFound desc = could not find container \"ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8\": container with ID starting with ab4343f05b50126bbe80f5263f783587c966b5a1f7bf354134b062ac53acacc8 not found: ID does not exist" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.648438 4988 scope.go:117] "RemoveContainer" containerID="f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721" Nov 23 06:58:59 crc kubenswrapper[4988]: E1123 06:58:59.648905 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721\": container with ID starting with f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721 not found: ID does not exist" containerID="f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.648986 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721"} err="failed to get container status \"f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721\": rpc error: code = NotFound desc = could not find container \"f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721\": container with ID starting with f97a293ddb64c6984d5243ab27b97123225ac94ae4d84e4f34591db845051721 not found: ID does not exist" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.649021 4988 scope.go:117] "RemoveContainer" containerID="ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99" Nov 23 06:58:59 crc kubenswrapper[4988]: E1123 06:58:59.649383 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99\": container with ID starting with ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99 not found: ID does not exist" containerID="ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99" Nov 23 06:58:59 crc kubenswrapper[4988]: I1123 06:58:59.649409 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99"} err="failed to get container status \"ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99\": rpc error: code = NotFound desc = could not find container \"ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99\": container with ID starting with ed66b2b0d7f7a2ae4c217f4ca407a7e1bf89a556521df3ee2361e4d0323f5e99 not found: ID does not exist" Nov 23 06:59:00 crc kubenswrapper[4988]: I1123 06:59:00.077652 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7" event={"ID":"fe77a552-6fcd-45d9-9d61-e242fdb0c4a2","Type":"ContainerStarted","Data":"3862b11caac2bd8d04a054a1fa40df4265a27ce87d2a3877a25f8e4584a88b3d"} Nov 23 06:59:00 crc kubenswrapper[4988]: I1123 06:59:00.101430 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-vwth7" podStartSLOduration=1.274082294 
podStartE2EDuration="7.101366123s" podCreationTimestamp="2025-11-23 06:58:53 +0000 UTC" firstStartedPulling="2025-11-23 06:58:53.822267267 +0000 UTC m=+786.130780030" lastFinishedPulling="2025-11-23 06:58:59.649551096 +0000 UTC m=+791.958063859" observedRunningTime="2025-11-23 06:59:00.099923688 +0000 UTC m=+792.408436461" watchObservedRunningTime="2025-11-23 06:59:00.101366123 +0000 UTC m=+792.409878896" Nov 23 06:59:00 crc kubenswrapper[4988]: I1123 06:59:00.507928 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" path="/var/lib/kubelet/pods/1d47121c-9d62-4fd5-bf28-a30c9f716116/volumes" Nov 23 06:59:02 crc kubenswrapper[4988]: I1123 06:59:02.216098 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:59:02 crc kubenswrapper[4988]: I1123 06:59:02.216651 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:59:02 crc kubenswrapper[4988]: I1123 06:59:02.288612 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:59:03 crc kubenswrapper[4988]: I1123 06:59:03.179458 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:59:03 crc kubenswrapper[4988]: I1123 06:59:03.250052 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhpxf"] Nov 23 06:59:03 crc kubenswrapper[4988]: I1123 06:59:03.447556 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2vt9j" Nov 23 06:59:03 crc kubenswrapper[4988]: I1123 06:59:03.759863 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:59:03 crc kubenswrapper[4988]: I1123 06:59:03.759941 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:59:03 crc kubenswrapper[4988]: I1123 06:59:03.768614 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:59:04 crc kubenswrapper[4988]: I1123 06:59:04.114761 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d7dddfc8-58mn5" Nov 23 06:59:04 crc kubenswrapper[4988]: I1123 06:59:04.177527 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gfkdg"] Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.114701 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nhpxf" podUID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerName="registry-server" containerID="cri-o://9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684" gracePeriod=2 Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.500973 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.609211 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-utilities\") pod \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.609296 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-catalog-content\") pod \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.609396 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2pkk\" (UniqueName: \"kubernetes.io/projected/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-kube-api-access-h2pkk\") pod \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\" (UID: \"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4\") " Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.611033 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-utilities" (OuterVolumeSpecName: "utilities") pod "2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" (UID: "2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.615088 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-kube-api-access-h2pkk" (OuterVolumeSpecName: "kube-api-access-h2pkk") pod "2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" (UID: "2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4"). InnerVolumeSpecName "kube-api-access-h2pkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.645039 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" (UID: "2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.710892 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2pkk\" (UniqueName: \"kubernetes.io/projected/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-kube-api-access-h2pkk\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.710943 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:05 crc kubenswrapper[4988]: I1123 06:59:05.710966 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.127884 4988 generic.go:334] "Generic (PLEG): container finished" podID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerID="9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684" exitCode=0 Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.127940 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhpxf" event={"ID":"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4","Type":"ContainerDied","Data":"9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684"} Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.127994 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhpxf" event={"ID":"2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4","Type":"ContainerDied","Data":"833bae0da33f9db2aaf111229299b7ce36e3eb25586f0a199402b78dc75463a2"} Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.128027 4988 scope.go:117] "RemoveContainer" containerID="9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.128035 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhpxf" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.160858 4988 scope.go:117] "RemoveContainer" containerID="be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.176347 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhpxf"] Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.182382 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhpxf"] Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.194110 4988 scope.go:117] "RemoveContainer" containerID="4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.222652 4988 scope.go:117] "RemoveContainer" containerID="9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684" Nov 23 06:59:06 crc kubenswrapper[4988]: E1123 06:59:06.223254 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684\": container with ID starting with 9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684 not found: ID does not exist" containerID="9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.223295 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684"} err="failed to get container status \"9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684\": rpc error: code = NotFound desc = could not find container \"9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684\": container with ID starting with 9d027424d0f1f033b215a01799fab46b3eb5a924c4abd46e28424011dfd36684 not found: ID does not exist" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.223323 4988 scope.go:117] "RemoveContainer" containerID="be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2" Nov 23 06:59:06 crc kubenswrapper[4988]: E1123 06:59:06.223828 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2\": container with ID starting with be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2 not found: ID does not exist" containerID="be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.223857 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2"} err="failed to get container status \"be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2\": rpc error: code = NotFound desc = could not find container \"be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2\": container with ID starting with be32c44b675a083e3a32b18d72b890bd63c1da024453a54071e346bb010f54b2 not found: ID does not exist" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.223876 4988 scope.go:117] "RemoveContainer" containerID="4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4" Nov 23 06:59:06 crc kubenswrapper[4988]: E1123 06:59:06.224386 4988 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4\": container with ID starting with 4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4 not found: ID does not exist" containerID="4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.224554 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4"} err="failed to get container status \"4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4\": rpc error: code = NotFound desc = could not find container \"4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4\": container with ID starting with 4343a481bcbbea6a1663ebd1cfd00b10bd52a61f57b15f95ce5cfdfa5b5ab2d4 not found: ID does not exist" Nov 23 06:59:06 crc kubenswrapper[4988]: I1123 06:59:06.509699 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" path="/var/lib/kubelet/pods/2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4/volumes" Nov 23 06:59:13 crc kubenswrapper[4988]: I1123 06:59:13.994255 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-ldrdz" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.713162 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg"] Nov 23 06:59:27 crc kubenswrapper[4988]: E1123 06:59:27.714261 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerName="extract-content" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.714283 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerName="extract-content" Nov 23 06:59:27 crc kubenswrapper[4988]: E1123 06:59:27.714304 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerName="extract-utilities" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.714318 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerName="extract-utilities" Nov 23 06:59:27 crc kubenswrapper[4988]: E1123 06:59:27.714344 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerName="extract-utilities" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.714360 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerName="extract-utilities" Nov 23 06:59:27 crc kubenswrapper[4988]: E1123 06:59:27.714381 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerName="registry-server" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.714394 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerName="registry-server" Nov 23 06:59:27 crc kubenswrapper[4988]: E1123 06:59:27.714416 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerName="extract-content" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.714429 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" 
containerName="extract-content" Nov 23 06:59:27 crc kubenswrapper[4988]: E1123 06:59:27.714452 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerName="registry-server" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.714466 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerName="registry-server" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.714655 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d47121c-9d62-4fd5-bf28-a30c9f716116" containerName="registry-server" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.714680 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4c74bc-c32c-4114-b2e7-14fe47b0c0a4" containerName="registry-server" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.716012 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.719871 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.737666 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.737873 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2xb\" (UniqueName: \"kubernetes.io/projected/61d7d35a-02bc-4b40-848b-e773652f2691-kube-api-access-tn2xb\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.737981 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.738170 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg"] Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.839120 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn2xb\" (UniqueName: \"kubernetes.io/projected/61d7d35a-02bc-4b40-848b-e773652f2691-kube-api-access-tn2xb\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.839185 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.839258 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.839828 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.839871 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:27 crc kubenswrapper[4988]: I1123 06:59:27.861277 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn2xb\" (UniqueName: \"kubernetes.io/projected/61d7d35a-02bc-4b40-848b-e773652f2691-kube-api-access-tn2xb\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:28 crc kubenswrapper[4988]: I1123 06:59:28.033517 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:28 crc kubenswrapper[4988]: I1123 06:59:28.488643 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg"] Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.232004 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-gfkdg" podUID="86592f41-a930-4436-96a8-4676e4bbf9bf" containerName="console" containerID="cri-o://9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e" gracePeriod=15 Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.308575 4988 generic.go:334] "Generic (PLEG): container finished" podID="61d7d35a-02bc-4b40-848b-e773652f2691" containerID="80f5bb25f09998aadb4297e74b8032b7fd04632e2e70e6ab293b8582cb12b9bf" exitCode=0 Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.308655 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" event={"ID":"61d7d35a-02bc-4b40-848b-e773652f2691","Type":"ContainerDied","Data":"80f5bb25f09998aadb4297e74b8032b7fd04632e2e70e6ab293b8582cb12b9bf"} Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.308725 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" event={"ID":"61d7d35a-02bc-4b40-848b-e773652f2691","Type":"ContainerStarted","Data":"9fd21dd32775788f91266f1b97c03fd300da2b22e1152a30888abce75ae4952a"} Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.672990 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gfkdg_86592f41-a930-4436-96a8-4676e4bbf9bf/console/0.log" Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.673070 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.867447 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-service-ca\") pod \"86592f41-a930-4436-96a8-4676e4bbf9bf\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.867575 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-oauth-serving-cert\") pod \"86592f41-a930-4436-96a8-4676e4bbf9bf\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.867610 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-trusted-ca-bundle\") pod \"86592f41-a930-4436-96a8-4676e4bbf9bf\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.867672 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljk5j\" (UniqueName: \"kubernetes.io/projected/86592f41-a930-4436-96a8-4676e4bbf9bf-kube-api-access-ljk5j\") pod \"86592f41-a930-4436-96a8-4676e4bbf9bf\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.867777 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-serving-cert\") pod \"86592f41-a930-4436-96a8-4676e4bbf9bf\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.867818 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-oauth-config\") pod \"86592f41-a930-4436-96a8-4676e4bbf9bf\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.867862 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-console-config\") pod \"86592f41-a930-4436-96a8-4676e4bbf9bf\" (UID: \"86592f41-a930-4436-96a8-4676e4bbf9bf\") " Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.868738 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-service-ca" (OuterVolumeSpecName: "service-ca") pod "86592f41-a930-4436-96a8-4676e4bbf9bf" (UID: "86592f41-a930-4436-96a8-4676e4bbf9bf"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.869011 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "86592f41-a930-4436-96a8-4676e4bbf9bf" (UID: "86592f41-a930-4436-96a8-4676e4bbf9bf"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.869072 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-console-config" (OuterVolumeSpecName: "console-config") pod "86592f41-a930-4436-96a8-4676e4bbf9bf" (UID: "86592f41-a930-4436-96a8-4676e4bbf9bf"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.869806 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "86592f41-a930-4436-96a8-4676e4bbf9bf" (UID: "86592f41-a930-4436-96a8-4676e4bbf9bf"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.877501 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86592f41-a930-4436-96a8-4676e4bbf9bf-kube-api-access-ljk5j" (OuterVolumeSpecName: "kube-api-access-ljk5j") pod "86592f41-a930-4436-96a8-4676e4bbf9bf" (UID: "86592f41-a930-4436-96a8-4676e4bbf9bf"). InnerVolumeSpecName "kube-api-access-ljk5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.878053 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "86592f41-a930-4436-96a8-4676e4bbf9bf" (UID: "86592f41-a930-4436-96a8-4676e4bbf9bf"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:29 crc kubenswrapper[4988]: I1123 06:59:29.882177 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "86592f41-a930-4436-96a8-4676e4bbf9bf" (UID: "86592f41-a930-4436-96a8-4676e4bbf9bf"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.012181 4988 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.012254 4988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.012272 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljk5j\" (UniqueName: \"kubernetes.io/projected/86592f41-a930-4436-96a8-4676e4bbf9bf-kube-api-access-ljk5j\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.012293 4988 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.012314 4988 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86592f41-a930-4436-96a8-4676e4bbf9bf-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.012331 4988 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-console-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.012350 4988 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86592f41-a930-4436-96a8-4676e4bbf9bf-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.319813 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gfkdg_86592f41-a930-4436-96a8-4676e4bbf9bf/console/0.log" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.319900 4988 generic.go:334] "Generic (PLEG): container finished" podID="86592f41-a930-4436-96a8-4676e4bbf9bf" containerID="9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e" exitCode=2 Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.319954 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gfkdg" event={"ID":"86592f41-a930-4436-96a8-4676e4bbf9bf","Type":"ContainerDied","Data":"9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e"} Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.320006 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gfkdg" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.320034 4988 scope.go:117] "RemoveContainer" containerID="9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.320013 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gfkdg" event={"ID":"86592f41-a930-4436-96a8-4676e4bbf9bf","Type":"ContainerDied","Data":"088c0fb85307af24434e2b84d55a84a7569e7be833a56e737144a89412860dbb"} Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.371698 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gfkdg"] Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.379051 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-gfkdg"] Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.389564 4988 scope.go:117] "RemoveContainer" containerID="9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e" Nov 23 06:59:30 crc kubenswrapper[4988]: E1123 06:59:30.390590 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e\": container with ID starting with 9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e not found: ID does not exist" containerID="9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.390809 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e"} err="failed to get container status \"9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e\": rpc error: code = NotFound desc = could not find container \"9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e\": container with ID starting with 9757160508ca276f6dd75e38ec6360efc198317fad0d604fc194d6abd5d32a6e not found: ID does not exist" Nov 23 06:59:30 crc kubenswrapper[4988]: I1123 06:59:30.507068 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86592f41-a930-4436-96a8-4676e4bbf9bf" path="/var/lib/kubelet/pods/86592f41-a930-4436-96a8-4676e4bbf9bf/volumes" Nov 23 06:59:31 crc kubenswrapper[4988]: I1123 06:59:31.326564 4988 generic.go:334] "Generic (PLEG): container finished" podID="61d7d35a-02bc-4b40-848b-e773652f2691" containerID="27b100db3c45ddc45d1c39ddac3d4a7c1d4fe81f4c2360322456fd372fe6c685" exitCode=0 Nov 23 06:59:31 crc kubenswrapper[4988]: I1123 06:59:31.326616 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" event={"ID":"61d7d35a-02bc-4b40-848b-e773652f2691","Type":"ContainerDied","Data":"27b100db3c45ddc45d1c39ddac3d4a7c1d4fe81f4c2360322456fd372fe6c685"} Nov 23 06:59:32 crc kubenswrapper[4988]: I1123 06:59:32.348753 4988 generic.go:334] "Generic (PLEG): container finished" podID="61d7d35a-02bc-4b40-848b-e773652f2691" containerID="5c613303e6af7c1b86cecba84177b092aded78814dd422b732d74de62248732a" exitCode=0 Nov 23 06:59:32 crc kubenswrapper[4988]: I1123 06:59:32.348880 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" 
event={"ID":"61d7d35a-02bc-4b40-848b-e773652f2691","Type":"ContainerDied","Data":"5c613303e6af7c1b86cecba84177b092aded78814dd422b732d74de62248732a"} Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.681021 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.761616 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-bundle\") pod \"61d7d35a-02bc-4b40-848b-e773652f2691\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.761663 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn2xb\" (UniqueName: \"kubernetes.io/projected/61d7d35a-02bc-4b40-848b-e773652f2691-kube-api-access-tn2xb\") pod \"61d7d35a-02bc-4b40-848b-e773652f2691\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.761705 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-util\") pod \"61d7d35a-02bc-4b40-848b-e773652f2691\" (UID: \"61d7d35a-02bc-4b40-848b-e773652f2691\") " Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.763393 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-bundle" (OuterVolumeSpecName: "bundle") pod "61d7d35a-02bc-4b40-848b-e773652f2691" (UID: "61d7d35a-02bc-4b40-848b-e773652f2691"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.772538 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d7d35a-02bc-4b40-848b-e773652f2691-kube-api-access-tn2xb" (OuterVolumeSpecName: "kube-api-access-tn2xb") pod "61d7d35a-02bc-4b40-848b-e773652f2691" (UID: "61d7d35a-02bc-4b40-848b-e773652f2691"). InnerVolumeSpecName "kube-api-access-tn2xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.784541 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-util" (OuterVolumeSpecName: "util") pod "61d7d35a-02bc-4b40-848b-e773652f2691" (UID: "61d7d35a-02bc-4b40-848b-e773652f2691"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.862935 4988 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.862989 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn2xb\" (UniqueName: \"kubernetes.io/projected/61d7d35a-02bc-4b40-848b-e773652f2691-kube-api-access-tn2xb\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:33 crc kubenswrapper[4988]: I1123 06:59:33.863006 4988 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61d7d35a-02bc-4b40-848b-e773652f2691-util\") on node \"crc\" DevicePath \"\"" Nov 23 06:59:34 crc kubenswrapper[4988]: I1123 06:59:34.364405 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" event={"ID":"61d7d35a-02bc-4b40-848b-e773652f2691","Type":"ContainerDied","Data":"9fd21dd32775788f91266f1b97c03fd300da2b22e1152a30888abce75ae4952a"} Nov 23 06:59:34 crc kubenswrapper[4988]: I1123 06:59:34.364483 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd21dd32775788f91266f1b97c03fd300da2b22e1152a30888abce75ae4952a" Nov 23 06:59:34 crc kubenswrapper[4988]: I1123 06:59:34.364527 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.983662 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg"] Nov 23 06:59:42 crc kubenswrapper[4988]: E1123 06:59:42.984392 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86592f41-a930-4436-96a8-4676e4bbf9bf" containerName="console" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.984403 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="86592f41-a930-4436-96a8-4676e4bbf9bf" containerName="console" Nov 23 06:59:42 crc kubenswrapper[4988]: E1123 06:59:42.984418 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d7d35a-02bc-4b40-848b-e773652f2691" containerName="extract" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.984424 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d7d35a-02bc-4b40-848b-e773652f2691" containerName="extract" Nov 23 06:59:42 crc kubenswrapper[4988]: E1123 06:59:42.984435 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d7d35a-02bc-4b40-848b-e773652f2691" containerName="pull" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.984442 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d7d35a-02bc-4b40-848b-e773652f2691" containerName="pull" Nov 23 06:59:42 crc kubenswrapper[4988]: E1123 06:59:42.984450 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d7d35a-02bc-4b40-848b-e773652f2691" containerName="util" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.984455 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d7d35a-02bc-4b40-848b-e773652f2691" containerName="util" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.984564 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d7d35a-02bc-4b40-848b-e773652f2691" containerName="extract" Nov 
23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.984572 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="86592f41-a930-4436-96a8-4676e4bbf9bf" containerName="console" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.984946 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.987000 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ppg4d" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.987427 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.987487 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.987553 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 23 06:59:42 crc kubenswrapper[4988]: I1123 06:59:42.987867 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.001503 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg"] Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.082622 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4dj\" (UniqueName: \"kubernetes.io/projected/ffe4a76d-1690-40e0-acc7-56a52602cc77-kube-api-access-kd4dj\") pod \"metallb-operator-controller-manager-79595c987c-ws8kg\" (UID: \"ffe4a76d-1690-40e0-acc7-56a52602cc77\") " pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.082674 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ffe4a76d-1690-40e0-acc7-56a52602cc77-apiservice-cert\") pod \"metallb-operator-controller-manager-79595c987c-ws8kg\" (UID: \"ffe4a76d-1690-40e0-acc7-56a52602cc77\") " pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.082702 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ffe4a76d-1690-40e0-acc7-56a52602cc77-webhook-cert\") pod \"metallb-operator-controller-manager-79595c987c-ws8kg\" (UID: \"ffe4a76d-1690-40e0-acc7-56a52602cc77\") " pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.184283 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd4dj\" (UniqueName: \"kubernetes.io/projected/ffe4a76d-1690-40e0-acc7-56a52602cc77-kube-api-access-kd4dj\") pod \"metallb-operator-controller-manager-79595c987c-ws8kg\" (UID: \"ffe4a76d-1690-40e0-acc7-56a52602cc77\") " pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.184552 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ffe4a76d-1690-40e0-acc7-56a52602cc77-apiservice-cert\") pod \"metallb-operator-controller-manager-79595c987c-ws8kg\" (UID: \"ffe4a76d-1690-40e0-acc7-56a52602cc77\") " pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.184605 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ffe4a76d-1690-40e0-acc7-56a52602cc77-webhook-cert\") pod \"metallb-operator-controller-manager-79595c987c-ws8kg\" (UID: \"ffe4a76d-1690-40e0-acc7-56a52602cc77\") " pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.191181 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ffe4a76d-1690-40e0-acc7-56a52602cc77-webhook-cert\") pod \"metallb-operator-controller-manager-79595c987c-ws8kg\" (UID: \"ffe4a76d-1690-40e0-acc7-56a52602cc77\") " pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.198885 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ffe4a76d-1690-40e0-acc7-56a52602cc77-apiservice-cert\") pod \"metallb-operator-controller-manager-79595c987c-ws8kg\" (UID: \"ffe4a76d-1690-40e0-acc7-56a52602cc77\") " pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.207881 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4dj\" (UniqueName: \"kubernetes.io/projected/ffe4a76d-1690-40e0-acc7-56a52602cc77-kube-api-access-kd4dj\") pod \"metallb-operator-controller-manager-79595c987c-ws8kg\" (UID: \"ffe4a76d-1690-40e0-acc7-56a52602cc77\") " pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.236490 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z"] Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.237360 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.239577 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.240011 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.241403 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-zwcln" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.264633 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z"] Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.308909 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.387641 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c08dbb11-869b-4dec-bb8a-67b5c693cd70-apiservice-cert\") pod \"metallb-operator-webhook-server-cffff7689-6qc7z\" (UID: \"c08dbb11-869b-4dec-bb8a-67b5c693cd70\") " pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.387696 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzhj\" (UniqueName: \"kubernetes.io/projected/c08dbb11-869b-4dec-bb8a-67b5c693cd70-kube-api-access-7hzhj\") pod \"metallb-operator-webhook-server-cffff7689-6qc7z\" (UID: \"c08dbb11-869b-4dec-bb8a-67b5c693cd70\") " pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.387828 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c08dbb11-869b-4dec-bb8a-67b5c693cd70-webhook-cert\") pod \"metallb-operator-webhook-server-cffff7689-6qc7z\" (UID: \"c08dbb11-869b-4dec-bb8a-67b5c693cd70\") " pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.488892 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c08dbb11-869b-4dec-bb8a-67b5c693cd70-webhook-cert\") pod \"metallb-operator-webhook-server-cffff7689-6qc7z\" (UID: \"c08dbb11-869b-4dec-bb8a-67b5c693cd70\") " pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.489342 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c08dbb11-869b-4dec-bb8a-67b5c693cd70-apiservice-cert\") pod \"metallb-operator-webhook-server-cffff7689-6qc7z\" (UID: \"c08dbb11-869b-4dec-bb8a-67b5c693cd70\") " pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.489367 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hzhj\" (UniqueName: \"kubernetes.io/projected/c08dbb11-869b-4dec-bb8a-67b5c693cd70-kube-api-access-7hzhj\") pod \"metallb-operator-webhook-server-cffff7689-6qc7z\" (UID: \"c08dbb11-869b-4dec-bb8a-67b5c693cd70\") " pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.519293 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c08dbb11-869b-4dec-bb8a-67b5c693cd70-webhook-cert\") pod \"metallb-operator-webhook-server-cffff7689-6qc7z\" (UID: \"c08dbb11-869b-4dec-bb8a-67b5c693cd70\") " pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.519405 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c08dbb11-869b-4dec-bb8a-67b5c693cd70-apiservice-cert\") pod \"metallb-operator-webhook-server-cffff7689-6qc7z\" (UID: \"c08dbb11-869b-4dec-bb8a-67b5c693cd70\") " 
pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.523851 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hzhj\" (UniqueName: \"kubernetes.io/projected/c08dbb11-869b-4dec-bb8a-67b5c693cd70-kube-api-access-7hzhj\") pod \"metallb-operator-webhook-server-cffff7689-6qc7z\" (UID: \"c08dbb11-869b-4dec-bb8a-67b5c693cd70\") " pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.558597 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.758394 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg"] Nov 23 06:59:43 crc kubenswrapper[4988]: W1123 06:59:43.768173 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffe4a76d_1690_40e0_acc7_56a52602cc77.slice/crio-ecb9f002870ebad96b825d39ddadc8cde66226a38f3211cc2a5c9533ecfd5f58 WatchSource:0}: Error finding container ecb9f002870ebad96b825d39ddadc8cde66226a38f3211cc2a5c9533ecfd5f58: Status 404 returned error can't find the container with id ecb9f002870ebad96b825d39ddadc8cde66226a38f3211cc2a5c9533ecfd5f58 Nov 23 06:59:43 crc kubenswrapper[4988]: I1123 06:59:43.821927 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z"] Nov 23 06:59:43 crc kubenswrapper[4988]: W1123 06:59:43.835251 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc08dbb11_869b_4dec_bb8a_67b5c693cd70.slice/crio-0d9a9fba9c26937a7bdea5a0acc1a9f62107c594fc89ff13a92471a04be18937 WatchSource:0}: Error finding container 0d9a9fba9c26937a7bdea5a0acc1a9f62107c594fc89ff13a92471a04be18937: Status 404 returned error can't find the container with id 0d9a9fba9c26937a7bdea5a0acc1a9f62107c594fc89ff13a92471a04be18937 Nov 23 06:59:44 crc kubenswrapper[4988]: I1123 06:59:44.426403 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" event={"ID":"c08dbb11-869b-4dec-bb8a-67b5c693cd70","Type":"ContainerStarted","Data":"0d9a9fba9c26937a7bdea5a0acc1a9f62107c594fc89ff13a92471a04be18937"} Nov 23 06:59:44 crc kubenswrapper[4988]: I1123 06:59:44.428580 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" event={"ID":"ffe4a76d-1690-40e0-acc7-56a52602cc77","Type":"ContainerStarted","Data":"ecb9f002870ebad96b825d39ddadc8cde66226a38f3211cc2a5c9533ecfd5f58"} Nov 23 06:59:49 crc kubenswrapper[4988]: I1123 06:59:49.461253 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" event={"ID":"c08dbb11-869b-4dec-bb8a-67b5c693cd70","Type":"ContainerStarted","Data":"35e80c194a7bfd9cd1b273a5c940d423dbdf544391ebea4fe267738ad97762f7"} Nov 23 06:59:49 crc kubenswrapper[4988]: I1123 06:59:49.462022 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 06:59:49 crc kubenswrapper[4988]: I1123 06:59:49.464154 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" event={"ID":"ffe4a76d-1690-40e0-acc7-56a52602cc77","Type":"ContainerStarted","Data":"decfd3b68c748e9c42488e02425b382283fe25526c97ef17c10793cdb74d3a27"} Nov 23 06:59:49 crc kubenswrapper[4988]: I1123 06:59:49.464316 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 06:59:49 crc kubenswrapper[4988]: I1123 06:59:49.485436 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" podStartSLOduration=1.856080231 podStartE2EDuration="6.485410863s" podCreationTimestamp="2025-11-23 06:59:43 +0000 UTC" firstStartedPulling="2025-11-23 06:59:43.837410191 +0000 UTC m=+836.145922954" lastFinishedPulling="2025-11-23 06:59:48.466740823 +0000 UTC m=+840.775253586" observedRunningTime="2025-11-23 06:59:49.482032797 +0000 UTC m=+841.790545570" watchObservedRunningTime="2025-11-23 06:59:49.485410863 +0000 UTC m=+841.793923646" Nov 23 06:59:49 crc kubenswrapper[4988]: I1123 06:59:49.511542 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" podStartSLOduration=2.824570981 podStartE2EDuration="7.511520257s" podCreationTimestamp="2025-11-23 06:59:42 +0000 UTC" firstStartedPulling="2025-11-23 06:59:43.775495353 +0000 UTC m=+836.084008116" lastFinishedPulling="2025-11-23 06:59:48.462444629 +0000 UTC m=+840.770957392" observedRunningTime="2025-11-23 06:59:49.506998552 +0000 UTC m=+841.815511335" watchObservedRunningTime="2025-11-23 06:59:49.511520257 +0000 UTC m=+841.820033040" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.127806 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd"] Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.129533 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.131629 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.131766 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.137136 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd"] Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.241578 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df1530b5-0204-4087-b649-5bdc2c82d76d-secret-volume\") pod \"collect-profiles-29398020-kxgmd\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.241949 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df1530b5-0204-4087-b649-5bdc2c82d76d-config-volume\") pod \"collect-profiles-29398020-kxgmd\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.242185 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f58fr\" (UniqueName: \"kubernetes.io/projected/df1530b5-0204-4087-b649-5bdc2c82d76d-kube-api-access-f58fr\") pod \"collect-profiles-29398020-kxgmd\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.343410 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f58fr\" (UniqueName: \"kubernetes.io/projected/df1530b5-0204-4087-b649-5bdc2c82d76d-kube-api-access-f58fr\") pod \"collect-profiles-29398020-kxgmd\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.343517 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df1530b5-0204-4087-b649-5bdc2c82d76d-secret-volume\") pod \"collect-profiles-29398020-kxgmd\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.343550 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df1530b5-0204-4087-b649-5bdc2c82d76d-config-volume\") pod \"collect-profiles-29398020-kxgmd\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.344972 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df1530b5-0204-4087-b649-5bdc2c82d76d-config-volume\") pod 
\"collect-profiles-29398020-kxgmd\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.352103 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df1530b5-0204-4087-b649-5bdc2c82d76d-secret-volume\") pod \"collect-profiles-29398020-kxgmd\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.386639 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f58fr\" (UniqueName: \"kubernetes.io/projected/df1530b5-0204-4087-b649-5bdc2c82d76d-kube-api-access-f58fr\") pod \"collect-profiles-29398020-kxgmd\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.489402 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:00 crc kubenswrapper[4988]: I1123 07:00:00.932756 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd"] Nov 23 07:00:01 crc kubenswrapper[4988]: I1123 07:00:01.546231 4988 generic.go:334] "Generic (PLEG): container finished" podID="df1530b5-0204-4087-b649-5bdc2c82d76d" containerID="0eb43bd1b1ab4f381b4ce368f02272f9c8ad2cca95069b9b781db0a4b79dc116" exitCode=0 Nov 23 07:00:01 crc kubenswrapper[4988]: I1123 07:00:01.546283 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" event={"ID":"df1530b5-0204-4087-b649-5bdc2c82d76d","Type":"ContainerDied","Data":"0eb43bd1b1ab4f381b4ce368f02272f9c8ad2cca95069b9b781db0a4b79dc116"} Nov 23 07:00:01 crc kubenswrapper[4988]: I1123 07:00:01.546555 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" event={"ID":"df1530b5-0204-4087-b649-5bdc2c82d76d","Type":"ContainerStarted","Data":"d40d498896cd474b0cd3bb473229cb09a340b64ead52479575a82692cd086cc2"} Nov 23 07:00:02 crc kubenswrapper[4988]: I1123 07:00:02.802052 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:02 crc kubenswrapper[4988]: I1123 07:00:02.977385 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df1530b5-0204-4087-b649-5bdc2c82d76d-secret-volume\") pod \"df1530b5-0204-4087-b649-5bdc2c82d76d\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " Nov 23 07:00:02 crc kubenswrapper[4988]: I1123 07:00:02.977570 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f58fr\" (UniqueName: \"kubernetes.io/projected/df1530b5-0204-4087-b649-5bdc2c82d76d-kube-api-access-f58fr\") pod \"df1530b5-0204-4087-b649-5bdc2c82d76d\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " Nov 23 07:00:02 crc kubenswrapper[4988]: I1123 07:00:02.977661 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df1530b5-0204-4087-b649-5bdc2c82d76d-config-volume\") pod \"df1530b5-0204-4087-b649-5bdc2c82d76d\" (UID: \"df1530b5-0204-4087-b649-5bdc2c82d76d\") " Nov 23 07:00:02 crc kubenswrapper[4988]: I1123 07:00:02.979230 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1530b5-0204-4087-b649-5bdc2c82d76d-config-volume" (OuterVolumeSpecName: "config-volume") pod "df1530b5-0204-4087-b649-5bdc2c82d76d" (UID: "df1530b5-0204-4087-b649-5bdc2c82d76d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:02 crc kubenswrapper[4988]: I1123 07:00:02.985603 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1530b5-0204-4087-b649-5bdc2c82d76d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df1530b5-0204-4087-b649-5bdc2c82d76d" (UID: "df1530b5-0204-4087-b649-5bdc2c82d76d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:03 crc kubenswrapper[4988]: I1123 07:00:03.011281 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1530b5-0204-4087-b649-5bdc2c82d76d-kube-api-access-f58fr" (OuterVolumeSpecName: "kube-api-access-f58fr") pod "df1530b5-0204-4087-b649-5bdc2c82d76d" (UID: "df1530b5-0204-4087-b649-5bdc2c82d76d"). InnerVolumeSpecName "kube-api-access-f58fr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:03 crc kubenswrapper[4988]: I1123 07:00:03.078791 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df1530b5-0204-4087-b649-5bdc2c82d76d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4988]: I1123 07:00:03.078828 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df1530b5-0204-4087-b649-5bdc2c82d76d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4988]: I1123 07:00:03.078838 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f58fr\" (UniqueName: \"kubernetes.io/projected/df1530b5-0204-4087-b649-5bdc2c82d76d-kube-api-access-f58fr\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[4988]: I1123 07:00:03.560157 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" event={"ID":"df1530b5-0204-4087-b649-5bdc2c82d76d","Type":"ContainerDied","Data":"d40d498896cd474b0cd3bb473229cb09a340b64ead52479575a82692cd086cc2"} Nov 23 07:00:03 crc kubenswrapper[4988]: I1123 07:00:03.560222 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d40d498896cd474b0cd3bb473229cb09a340b64ead52479575a82692cd086cc2" Nov 23 07:00:03 crc kubenswrapper[4988]: I1123 07:00:03.560277 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd" Nov 23 07:00:03 crc kubenswrapper[4988]: I1123 07:00:03.565209 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-cffff7689-6qc7z" Nov 23 07:00:23 crc kubenswrapper[4988]: I1123 07:00:23.312420 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-79595c987c-ws8kg" Nov 23 07:00:23 crc kubenswrapper[4988]: I1123 07:00:23.853012 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zvmz7"] Nov 23 07:00:23 crc kubenswrapper[4988]: E1123 07:00:23.853336 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1530b5-0204-4087-b649-5bdc2c82d76d" containerName="collect-profiles" Nov 23 07:00:23 crc kubenswrapper[4988]: I1123 07:00:23.853354 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1530b5-0204-4087-b649-5bdc2c82d76d" containerName="collect-profiles" Nov 23 07:00:23 crc kubenswrapper[4988]: I1123 07:00:23.853501 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1530b5-0204-4087-b649-5bdc2c82d76d" containerName="collect-profiles" Nov 23 07:00:23 crc kubenswrapper[4988]: I1123 07:00:23.854613 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:23 crc kubenswrapper[4988]: I1123 07:00:23.873887 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvmz7"] Nov 23 07:00:23 crc kubenswrapper[4988]: I1123 07:00:23.998101 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-catalog-content\") pod \"community-operators-zvmz7\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:23 crc kubenswrapper[4988]: I1123 07:00:23.998215 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lks7c\" (UniqueName: \"kubernetes.io/projected/5e07f374-c82a-4782-9bc0-9e289df776bf-kube-api-access-lks7c\") pod \"community-operators-zvmz7\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:23 crc kubenswrapper[4988]: I1123 07:00:23.998248 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-utilities\") pod \"community-operators-zvmz7\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.069383 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-l95c9"] Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.072411 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-6thck"] Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.073081 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.073827 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.075103 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.075331 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wdgfv" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.076401 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.076556 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.099794 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lks7c\" (UniqueName: \"kubernetes.io/projected/5e07f374-c82a-4782-9bc0-9e289df776bf-kube-api-access-lks7c\") pod \"community-operators-zvmz7\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.099893 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-utilities\") pod \"community-operators-zvmz7\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.100372 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-utilities\") pod \"community-operators-zvmz7\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.100442 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-catalog-content\") pod \"community-operators-zvmz7\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.100682 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-catalog-content\") pod \"community-operators-zvmz7\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.105650 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-6thck"] Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.136893 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lks7c\" (UniqueName: \"kubernetes.io/projected/5e07f374-c82a-4782-9bc0-9e289df776bf-kube-api-access-lks7c\") pod \"community-operators-zvmz7\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.142106 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6flql"] Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.143037 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.145608 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.145627 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.146055 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.146374 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-5v8kz" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.161919 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-9rgvp"] Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.165174 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.168974 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.173915 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.188210 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-9rgvp"] Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.206765 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-metrics\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.206825 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7664e29b-309a-4f06-bfd5-6fc10d70479e-frr-startup\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.206850 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-frr-sockets\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.206882 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-frr-conf\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.206907 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrjzs\" (UniqueName: \"kubernetes.io/projected/7664e29b-309a-4f06-bfd5-6fc10d70479e-kube-api-access-mrjzs\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc 
kubenswrapper[4988]: I1123 07:00:24.206930 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-reloader\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.206962 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/101dd547-04aa-4e7d-9464-14100da79eed-cert\") pod \"frr-k8s-webhook-server-6998585d5-6thck\" (UID: \"101dd547-04aa-4e7d-9464-14100da79eed\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.207010 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7664e29b-309a-4f06-bfd5-6fc10d70479e-metrics-certs\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.207043 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjzzx\" (UniqueName: \"kubernetes.io/projected/101dd547-04aa-4e7d-9464-14100da79eed-kube-api-access-xjzzx\") pod \"frr-k8s-webhook-server-6998585d5-6thck\" (UID: \"101dd547-04aa-4e7d-9464-14100da79eed\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.308822 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c39f006-2177-4afd-a5d5-a869f8aabad6-metrics-certs\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309170 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7664e29b-309a-4f06-bfd5-6fc10d70479e-metrics-certs\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309208 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjzzx\" (UniqueName: \"kubernetes.io/projected/101dd547-04aa-4e7d-9464-14100da79eed-kube-api-access-xjzzx\") pod \"frr-k8s-webhook-server-6998585d5-6thck\" (UID: \"101dd547-04aa-4e7d-9464-14100da79eed\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309232 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-metrics-certs\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309439 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4f9b\" (UniqueName: \"kubernetes.io/projected/1c39f006-2177-4afd-a5d5-a869f8aabad6-kube-api-access-z4f9b\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " 
pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309485 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c39f006-2177-4afd-a5d5-a869f8aabad6-cert\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309515 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-metrics\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309533 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7664e29b-309a-4f06-bfd5-6fc10d70479e-frr-startup\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309549 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/13cf1d15-67f3-4424-a807-f508e85f2a26-metallb-excludel2\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309568 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slkmd\" (UniqueName: \"kubernetes.io/projected/13cf1d15-67f3-4424-a807-f508e85f2a26-kube-api-access-slkmd\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309585 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-frr-sockets\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309605 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-frr-conf\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309624 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrjzs\" (UniqueName: \"kubernetes.io/projected/7664e29b-309a-4f06-bfd5-6fc10d70479e-kube-api-access-mrjzs\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309639 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-reloader\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309657 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/101dd547-04aa-4e7d-9464-14100da79eed-cert\") pod \"frr-k8s-webhook-server-6998585d5-6thck\" (UID: \"101dd547-04aa-4e7d-9464-14100da79eed\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.309678 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-memberlist\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.311050 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-metrics\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.311713 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-frr-conf\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.311788 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7664e29b-309a-4f06-bfd5-6fc10d70479e-frr-startup\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.312012 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-frr-sockets\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.312029 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7664e29b-309a-4f06-bfd5-6fc10d70479e-reloader\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.321061 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7664e29b-309a-4f06-bfd5-6fc10d70479e-metrics-certs\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.334846 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/101dd547-04aa-4e7d-9464-14100da79eed-cert\") pod \"frr-k8s-webhook-server-6998585d5-6thck\" (UID: \"101dd547-04aa-4e7d-9464-14100da79eed\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.345775 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrjzs\" (UniqueName: \"kubernetes.io/projected/7664e29b-309a-4f06-bfd5-6fc10d70479e-kube-api-access-mrjzs\") pod \"frr-k8s-l95c9\" (UID: \"7664e29b-309a-4f06-bfd5-6fc10d70479e\") " pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.367789 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xjzzx\" (UniqueName: \"kubernetes.io/projected/101dd547-04aa-4e7d-9464-14100da79eed-kube-api-access-xjzzx\") pod \"frr-k8s-webhook-server-6998585d5-6thck\" (UID: \"101dd547-04aa-4e7d-9464-14100da79eed\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.392561 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.404582 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.410966 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-metrics-certs\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.411008 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4f9b\" (UniqueName: \"kubernetes.io/projected/1c39f006-2177-4afd-a5d5-a869f8aabad6-kube-api-access-z4f9b\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.411037 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c39f006-2177-4afd-a5d5-a869f8aabad6-cert\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.411077 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/13cf1d15-67f3-4424-a807-f508e85f2a26-metallb-excludel2\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.411100 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slkmd\" (UniqueName: \"kubernetes.io/projected/13cf1d15-67f3-4424-a807-f508e85f2a26-kube-api-access-slkmd\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.411137 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-memberlist\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.411162 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c39f006-2177-4afd-a5d5-a869f8aabad6-metrics-certs\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: E1123 07:00:24.411473 4988 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 23 07:00:24 crc kubenswrapper[4988]: E1123 07:00:24.411522 4988 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c39f006-2177-4afd-a5d5-a869f8aabad6-metrics-certs podName:1c39f006-2177-4afd-a5d5-a869f8aabad6 nodeName:}" failed. No retries permitted until 2025-11-23 07:00:24.911506237 +0000 UTC m=+877.220019000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1c39f006-2177-4afd-a5d5-a869f8aabad6-metrics-certs") pod "controller-6c7b4b5f48-9rgvp" (UID: "1c39f006-2177-4afd-a5d5-a869f8aabad6") : secret "controller-certs-secret" not found Nov 23 07:00:24 crc kubenswrapper[4988]: E1123 07:00:24.416303 4988 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 23 07:00:24 crc kubenswrapper[4988]: E1123 07:00:24.416367 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-memberlist podName:13cf1d15-67f3-4424-a807-f508e85f2a26 nodeName:}" failed. No retries permitted until 2025-11-23 07:00:24.91635254 +0000 UTC m=+877.224865303 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-memberlist") pod "speaker-6flql" (UID: "13cf1d15-67f3-4424-a807-f508e85f2a26") : secret "metallb-memberlist" not found Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.416456 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/13cf1d15-67f3-4424-a807-f508e85f2a26-metallb-excludel2\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.417628 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-metrics-certs\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.421526 4988 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.434302 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c39f006-2177-4afd-a5d5-a869f8aabad6-cert\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.456817 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4f9b\" (UniqueName: \"kubernetes.io/projected/1c39f006-2177-4afd-a5d5-a869f8aabad6-kube-api-access-z4f9b\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.488623 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slkmd\" (UniqueName: \"kubernetes.io/projected/13cf1d15-67f3-4424-a807-f508e85f2a26-kube-api-access-slkmd\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.703042 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l95c9" 
event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerStarted","Data":"ceb9d74ccd12cddd696c2c567a44384d39f5858b6c5fe1e12f0fcb307f361ffa"} Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.758911 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-6thck"] Nov 23 07:00:24 crc kubenswrapper[4988]: W1123 07:00:24.766980 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod101dd547_04aa_4e7d_9464_14100da79eed.slice/crio-1fd776530f7d327ef3ea8bf94816bdda0daeabe6d33fe4ada12fe33b67618c21 WatchSource:0}: Error finding container 1fd776530f7d327ef3ea8bf94816bdda0daeabe6d33fe4ada12fe33b67618c21: Status 404 returned error can't find the container with id 1fd776530f7d327ef3ea8bf94816bdda0daeabe6d33fe4ada12fe33b67618c21 Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.782887 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvmz7"] Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.918434 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-memberlist\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.918860 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c39f006-2177-4afd-a5d5-a869f8aabad6-metrics-certs\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:24 crc kubenswrapper[4988]: E1123 07:00:24.918677 4988 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 23 07:00:24 crc kubenswrapper[4988]: E1123 07:00:24.918976 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-memberlist podName:13cf1d15-67f3-4424-a807-f508e85f2a26 nodeName:}" failed. No retries permitted until 2025-11-23 07:00:25.918954741 +0000 UTC m=+878.227467504 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-memberlist") pod "speaker-6flql" (UID: "13cf1d15-67f3-4424-a807-f508e85f2a26") : secret "metallb-memberlist" not found Nov 23 07:00:24 crc kubenswrapper[4988]: I1123 07:00:24.924826 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1c39f006-2177-4afd-a5d5-a869f8aabad6-metrics-certs\") pod \"controller-6c7b4b5f48-9rgvp\" (UID: \"1c39f006-2177-4afd-a5d5-a869f8aabad6\") " pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.086149 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.325798 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-9rgvp"] Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.711431 4988 generic.go:334] "Generic (PLEG): container finished" podID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerID="b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b" exitCode=0 Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.711483 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvmz7" event={"ID":"5e07f374-c82a-4782-9bc0-9e289df776bf","Type":"ContainerDied","Data":"b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b"} Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.711528 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvmz7" event={"ID":"5e07f374-c82a-4782-9bc0-9e289df776bf","Type":"ContainerStarted","Data":"6548af36f279e73105ec81caac42140bd8e46fb6c17f0d96b3f0543385517583"} Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.712852 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" event={"ID":"101dd547-04aa-4e7d-9464-14100da79eed","Type":"ContainerStarted","Data":"1fd776530f7d327ef3ea8bf94816bdda0daeabe6d33fe4ada12fe33b67618c21"} Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.719143 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-9rgvp" event={"ID":"1c39f006-2177-4afd-a5d5-a869f8aabad6","Type":"ContainerStarted","Data":"a94fbf3e08e122450721f4132a032279659652dc84a2c01f87e5cd8310b7efa5"} Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.719180 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-9rgvp" event={"ID":"1c39f006-2177-4afd-a5d5-a869f8aabad6","Type":"ContainerStarted","Data":"b751f1c32f653a11c078a3f5c38d39a9df2ab0d6443a9f607d81ffd239c881b1"} Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.719203 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-9rgvp" event={"ID":"1c39f006-2177-4afd-a5d5-a869f8aabad6","Type":"ContainerStarted","Data":"2672fe3eeecbb9266450632acf0ef6674a286933b3ed1d0baeb6605b2e5fbe5b"} Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.719303 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.936545 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-memberlist\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.943156 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/13cf1d15-67f3-4424-a807-f508e85f2a26-memberlist\") pod \"speaker-6flql\" (UID: \"13cf1d15-67f3-4424-a807-f508e85f2a26\") " pod="metallb-system/speaker-6flql" Nov 23 07:00:25 crc kubenswrapper[4988]: I1123 07:00:25.972362 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6flql" Nov 23 07:00:26 crc kubenswrapper[4988]: W1123 07:00:26.003646 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cf1d15_67f3_4424_a807_f508e85f2a26.slice/crio-2b15b56259c350cac10d53518acd622067ab910ae365c24ab2f3cc7ab5e9db26 WatchSource:0}: Error finding container 2b15b56259c350cac10d53518acd622067ab910ae365c24ab2f3cc7ab5e9db26: Status 404 returned error can't find the container with id 2b15b56259c350cac10d53518acd622067ab910ae365c24ab2f3cc7ab5e9db26 Nov 23 07:00:26 crc kubenswrapper[4988]: I1123 07:00:26.741796 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvmz7" event={"ID":"5e07f374-c82a-4782-9bc0-9e289df776bf","Type":"ContainerStarted","Data":"0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2"} Nov 23 07:00:26 crc kubenswrapper[4988]: I1123 07:00:26.745365 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6flql" event={"ID":"13cf1d15-67f3-4424-a807-f508e85f2a26","Type":"ContainerStarted","Data":"0778b0a453e931e9d28bdda0672d2ec979b99a8f816aa249e4683e57af948a59"} Nov 23 07:00:26 crc kubenswrapper[4988]: I1123 07:00:26.745423 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6flql" event={"ID":"13cf1d15-67f3-4424-a807-f508e85f2a26","Type":"ContainerStarted","Data":"2b15b56259c350cac10d53518acd622067ab910ae365c24ab2f3cc7ab5e9db26"} Nov 23 07:00:26 crc kubenswrapper[4988]: I1123 07:00:26.761855 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-9rgvp" podStartSLOduration=2.761834938 podStartE2EDuration="2.761834938s" podCreationTimestamp="2025-11-23 07:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:00:25.759257242 +0000 UTC m=+878.067770015" watchObservedRunningTime="2025-11-23 07:00:26.761834938 +0000 UTC m=+879.070347701" Nov 23 07:00:27 crc kubenswrapper[4988]: I1123 07:00:27.761898 4988 generic.go:334] "Generic (PLEG): container finished" podID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerID="0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2" exitCode=0 Nov 23 07:00:27 crc kubenswrapper[4988]: I1123 07:00:27.762382 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvmz7" event={"ID":"5e07f374-c82a-4782-9bc0-9e289df776bf","Type":"ContainerDied","Data":"0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2"} Nov 23 07:00:27 crc kubenswrapper[4988]: I1123 07:00:27.768169 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6flql" event={"ID":"13cf1d15-67f3-4424-a807-f508e85f2a26","Type":"ContainerStarted","Data":"1aa18b2e6f0932b0bffb0f0e2f9aa2d03f9a7542d5368d57d4d5d1238018b8bd"} Nov 23 07:00:27 crc kubenswrapper[4988]: I1123 07:00:27.768347 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6flql" Nov 23 07:00:27 crc kubenswrapper[4988]: I1123 07:00:27.806573 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6flql" podStartSLOduration=3.8065536939999998 podStartE2EDuration="3.806553694s" podCreationTimestamp="2025-11-23 07:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-23 07:00:27.802714787 +0000 UTC m=+880.111227560" watchObservedRunningTime="2025-11-23 07:00:27.806553694 +0000 UTC m=+880.115066457" Nov 23 07:00:28 crc kubenswrapper[4988]: I1123 07:00:28.777384 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvmz7" event={"ID":"5e07f374-c82a-4782-9bc0-9e289df776bf","Type":"ContainerStarted","Data":"26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2"} Nov 23 07:00:31 crc kubenswrapper[4988]: I1123 07:00:31.799376 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zvmz7" podStartSLOduration=6.281876491 podStartE2EDuration="8.79935741s" podCreationTimestamp="2025-11-23 07:00:23 +0000 UTC" firstStartedPulling="2025-11-23 07:00:25.713770857 +0000 UTC m=+878.022283620" lastFinishedPulling="2025-11-23 07:00:28.231251776 +0000 UTC m=+880.539764539" observedRunningTime="2025-11-23 07:00:28.797422652 +0000 UTC m=+881.105935425" watchObservedRunningTime="2025-11-23 07:00:31.79935741 +0000 UTC m=+884.107870173" Nov 23 07:00:31 crc kubenswrapper[4988]: I1123 07:00:31.804413 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4xmht"] Nov 23 07:00:31 crc kubenswrapper[4988]: I1123 07:00:31.805796 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:31 crc kubenswrapper[4988]: I1123 07:00:31.824758 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xmht"] Nov 23 07:00:31 crc kubenswrapper[4988]: I1123 07:00:31.930820 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-utilities\") pod \"certified-operators-4xmht\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:31 crc kubenswrapper[4988]: I1123 07:00:31.930887 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4bct\" (UniqueName: \"kubernetes.io/projected/25b6e75e-25a0-4942-969c-b2a99a266373-kube-api-access-s4bct\") pod \"certified-operators-4xmht\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:31 crc kubenswrapper[4988]: I1123 07:00:31.930996 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-catalog-content\") pod \"certified-operators-4xmht\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:32 crc kubenswrapper[4988]: I1123 07:00:32.032580 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-catalog-content\") pod \"certified-operators-4xmht\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:32 crc kubenswrapper[4988]: I1123 07:00:32.032652 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-utilities\") pod 
\"certified-operators-4xmht\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:32 crc kubenswrapper[4988]: I1123 07:00:32.032685 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4bct\" (UniqueName: \"kubernetes.io/projected/25b6e75e-25a0-4942-969c-b2a99a266373-kube-api-access-s4bct\") pod \"certified-operators-4xmht\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:32 crc kubenswrapper[4988]: I1123 07:00:32.033919 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-catalog-content\") pod \"certified-operators-4xmht\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:32 crc kubenswrapper[4988]: I1123 07:00:32.034175 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-utilities\") pod \"certified-operators-4xmht\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:32 crc kubenswrapper[4988]: I1123 07:00:32.055464 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4bct\" (UniqueName: \"kubernetes.io/projected/25b6e75e-25a0-4942-969c-b2a99a266373-kube-api-access-s4bct\") pod \"certified-operators-4xmht\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:32 crc kubenswrapper[4988]: I1123 07:00:32.126575 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:32 crc kubenswrapper[4988]: I1123 07:00:32.909870 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xmht"] Nov 23 07:00:33 crc kubenswrapper[4988]: I1123 07:00:33.072768 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xmht" event={"ID":"25b6e75e-25a0-4942-969c-b2a99a266373","Type":"ContainerStarted","Data":"1f1446e5ab5bafa4c35f38e75f7b5557f4b49a04ca354dcc5b549d5c53281808"} Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.083879 4988 generic.go:334] "Generic (PLEG): container finished" podID="7664e29b-309a-4f06-bfd5-6fc10d70479e" containerID="b04797f73fdee670b4d53b0cd38288bee97d6a006f0eb45a7d0e5646b890e6d6" exitCode=0 Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.083984 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l95c9" event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerDied","Data":"b04797f73fdee670b4d53b0cd38288bee97d6a006f0eb45a7d0e5646b890e6d6"} Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.088798 4988 generic.go:334] "Generic (PLEG): container finished" podID="25b6e75e-25a0-4942-969c-b2a99a266373" containerID="52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736" exitCode=0 Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.088949 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xmht" event={"ID":"25b6e75e-25a0-4942-969c-b2a99a266373","Type":"ContainerDied","Data":"52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736"} Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.092062 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" event={"ID":"101dd547-04aa-4e7d-9464-14100da79eed","Type":"ContainerStarted","Data":"eebb8f637f3ef75a3fe0554d38797f80d6f5cadca4378a04356a3156ca91b735"} Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.092552 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.174981 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.175944 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.178370 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" podStartSLOduration=1.363102357 podStartE2EDuration="10.17835604s" podCreationTimestamp="2025-11-23 07:00:24 +0000 UTC" firstStartedPulling="2025-11-23 07:00:24.768623661 +0000 UTC m=+877.077136414" lastFinishedPulling="2025-11-23 07:00:33.583877334 +0000 UTC m=+885.892390097" observedRunningTime="2025-11-23 07:00:34.170762127 +0000 UTC m=+886.479274930" watchObservedRunningTime="2025-11-23 07:00:34.17835604 +0000 UTC m=+886.486868813" Nov 23 07:00:34 crc kubenswrapper[4988]: I1123 07:00:34.233207 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:35 crc kubenswrapper[4988]: I1123 07:00:35.091382 4988 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-9rgvp" Nov 23 07:00:35 crc kubenswrapper[4988]: I1123 07:00:35.103403 4988 generic.go:334] "Generic (PLEG): container finished" podID="7664e29b-309a-4f06-bfd5-6fc10d70479e" containerID="1b6d42af4f96b5a5d2167f55605c6996462cab3b6400d82a79e840a56bd39f5f" exitCode=0 Nov 23 07:00:35 crc kubenswrapper[4988]: I1123 07:00:35.103480 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l95c9" event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerDied","Data":"1b6d42af4f96b5a5d2167f55605c6996462cab3b6400d82a79e840a56bd39f5f"} Nov 23 07:00:35 crc kubenswrapper[4988]: I1123 07:00:35.110530 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xmht" event={"ID":"25b6e75e-25a0-4942-969c-b2a99a266373","Type":"ContainerStarted","Data":"2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051"} Nov 23 07:00:35 crc kubenswrapper[4988]: I1123 07:00:35.181422 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:36 crc kubenswrapper[4988]: I1123 07:00:36.120880 4988 generic.go:334] "Generic (PLEG): container finished" podID="25b6e75e-25a0-4942-969c-b2a99a266373" containerID="2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051" exitCode=0 Nov 23 07:00:36 crc kubenswrapper[4988]: I1123 07:00:36.121007 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xmht" event={"ID":"25b6e75e-25a0-4942-969c-b2a99a266373","Type":"ContainerDied","Data":"2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051"} Nov 23 07:00:36 crc kubenswrapper[4988]: I1123 07:00:36.633311 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvmz7"] Nov 23 07:00:37 crc kubenswrapper[4988]: I1123 07:00:37.129773 4988 generic.go:334] "Generic (PLEG): container finished" podID="7664e29b-309a-4f06-bfd5-6fc10d70479e" containerID="30fe7cd5b794363b00e1d8e4d94f4ca2284e5fe2282864c11ed0df379d6ec7e2" exitCode=0 Nov 23 07:00:37 crc kubenswrapper[4988]: I1123 07:00:37.129876 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l95c9" event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerDied","Data":"30fe7cd5b794363b00e1d8e4d94f4ca2284e5fe2282864c11ed0df379d6ec7e2"} Nov 23 07:00:37 crc kubenswrapper[4988]: I1123 07:00:37.132952 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xmht" event={"ID":"25b6e75e-25a0-4942-969c-b2a99a266373","Type":"ContainerStarted","Data":"1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad"} Nov 23 07:00:37 crc kubenswrapper[4988]: I1123 07:00:37.224219 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4xmht" podStartSLOduration=3.489427994 podStartE2EDuration="6.224203754s" podCreationTimestamp="2025-11-23 07:00:31 +0000 UTC" firstStartedPulling="2025-11-23 07:00:34.090815985 +0000 UTC m=+886.399328748" lastFinishedPulling="2025-11-23 07:00:36.825591715 +0000 UTC m=+889.134104508" observedRunningTime="2025-11-23 07:00:37.220294885 +0000 UTC m=+889.528807648" watchObservedRunningTime="2025-11-23 07:00:37.224203754 +0000 UTC m=+889.532716517" Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.145711 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-l95c9" event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerStarted","Data":"53560ec4545ec5f9edcea6863725874cdaea51c474efe03ac039b4144a891f79"} Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.145863 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zvmz7" podUID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerName="registry-server" containerID="cri-o://26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2" gracePeriod=2 Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.146459 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l95c9" event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerStarted","Data":"bc03aa27f51948c793a3db88522e26897885a88c227d49863d07258b6a36349e"} Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.146607 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l95c9" event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerStarted","Data":"c38fc4a99685aa7d5096dc33c1e7cd26d470e1455954beefb2320fd7815e826f"} Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.146626 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l95c9" event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerStarted","Data":"0933dc9d82bb6dd46d63f30d420e5b04a7884af034fe75b5f5eefc3db6c13f5b"} Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.146643 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l95c9" event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerStarted","Data":"2f3a3d3701d843a10917644929244592bde2d02eafabdf69d77339374e0c1a43"} Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.514708 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.656220 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-catalog-content\") pod \"5e07f374-c82a-4782-9bc0-9e289df776bf\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.656288 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lks7c\" (UniqueName: \"kubernetes.io/projected/5e07f374-c82a-4782-9bc0-9e289df776bf-kube-api-access-lks7c\") pod \"5e07f374-c82a-4782-9bc0-9e289df776bf\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.656354 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-utilities\") pod \"5e07f374-c82a-4782-9bc0-9e289df776bf\" (UID: \"5e07f374-c82a-4782-9bc0-9e289df776bf\") " Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.658042 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-utilities" (OuterVolumeSpecName: "utilities") pod "5e07f374-c82a-4782-9bc0-9e289df776bf" (UID: "5e07f374-c82a-4782-9bc0-9e289df776bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.667233 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e07f374-c82a-4782-9bc0-9e289df776bf-kube-api-access-lks7c" (OuterVolumeSpecName: "kube-api-access-lks7c") pod "5e07f374-c82a-4782-9bc0-9e289df776bf" (UID: "5e07f374-c82a-4782-9bc0-9e289df776bf"). InnerVolumeSpecName "kube-api-access-lks7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.715791 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e07f374-c82a-4782-9bc0-9e289df776bf" (UID: "5e07f374-c82a-4782-9bc0-9e289df776bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.757531 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.757574 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e07f374-c82a-4782-9bc0-9e289df776bf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:38 crc kubenswrapper[4988]: I1123 07:00:38.757589 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lks7c\" (UniqueName: \"kubernetes.io/projected/5e07f374-c82a-4782-9bc0-9e289df776bf-kube-api-access-lks7c\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.157846 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l95c9" event={"ID":"7664e29b-309a-4f06-bfd5-6fc10d70479e","Type":"ContainerStarted","Data":"8b335aabff08c31a85db850b8bf655c1f791564b2a411e211b9bc38ff4577b29"} Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.159920 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.160907 4988 generic.go:334] "Generic (PLEG): container finished" podID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerID="26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2" exitCode=0 Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.160964 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvmz7" event={"ID":"5e07f374-c82a-4782-9bc0-9e289df776bf","Type":"ContainerDied","Data":"26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2"} Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.160994 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvmz7" event={"ID":"5e07f374-c82a-4782-9bc0-9e289df776bf","Type":"ContainerDied","Data":"6548af36f279e73105ec81caac42140bd8e46fb6c17f0d96b3f0543385517583"} Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.161017 4988 scope.go:117] "RemoveContainer" containerID="26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.161047 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvmz7" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.191521 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-l95c9" podStartSLOduration=6.25184744 podStartE2EDuration="15.191488913s" podCreationTimestamp="2025-11-23 07:00:24 +0000 UTC" firstStartedPulling="2025-11-23 07:00:24.600327995 +0000 UTC m=+876.908840758" lastFinishedPulling="2025-11-23 07:00:33.539969468 +0000 UTC m=+885.848482231" observedRunningTime="2025-11-23 07:00:39.185909041 +0000 UTC m=+891.494421814" watchObservedRunningTime="2025-11-23 07:00:39.191488913 +0000 UTC m=+891.500001676" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.200546 4988 scope.go:117] "RemoveContainer" containerID="0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.202316 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvmz7"] Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.205788 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zvmz7"] Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.232060 4988 scope.go:117] "RemoveContainer" containerID="b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.261730 4988 scope.go:117] "RemoveContainer" containerID="26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2" Nov 23 07:00:39 crc kubenswrapper[4988]: E1123 07:00:39.262282 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2\": container with ID starting with 26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2 not found: ID does not exist" containerID="26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.262321 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2"} err="failed to get container status \"26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2\": rpc error: code = NotFound desc = could not find container \"26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2\": container with ID starting with 26843cfc241c8761c530714b7d562abcccbe5bc426b22001e7c74ade97197ae2 not found: ID does not exist" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.262354 4988 scope.go:117] "RemoveContainer" containerID="0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2" Nov 23 07:00:39 crc kubenswrapper[4988]: E1123 07:00:39.262706 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2\": container with ID starting with 0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2 not found: ID does not exist" containerID="0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.262755 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2"} err="failed to get container status 
\"0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2\": rpc error: code = NotFound desc = could not find container \"0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2\": container with ID starting with 0ef7c8ed47595e663127192d394fbeaa54da057769d01596f0a2defe249003a2 not found: ID does not exist" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.262781 4988 scope.go:117] "RemoveContainer" containerID="b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b" Nov 23 07:00:39 crc kubenswrapper[4988]: E1123 07:00:39.263577 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b\": container with ID starting with b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b not found: ID does not exist" containerID="b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.263618 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b"} err="failed to get container status \"b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b\": rpc error: code = NotFound desc = could not find container \"b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b\": container with ID starting with b4865c021cd1ab4cbf3a4125aa1147c932562c1ee48c2fa3dcd1d2d138e4aa5b not found: ID does not exist" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.405314 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:39 crc kubenswrapper[4988]: I1123 07:00:39.478289 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:40 crc kubenswrapper[4988]: I1123 07:00:40.504487 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e07f374-c82a-4782-9bc0-9e289df776bf" path="/var/lib/kubelet/pods/5e07f374-c82a-4782-9bc0-9e289df776bf/volumes" Nov 23 07:00:42 crc kubenswrapper[4988]: I1123 07:00:42.126844 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:42 crc kubenswrapper[4988]: I1123 07:00:42.126897 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:42 crc kubenswrapper[4988]: I1123 07:00:42.183801 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:42 crc kubenswrapper[4988]: I1123 07:00:42.240942 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:42 crc kubenswrapper[4988]: I1123 07:00:42.631955 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xmht"] Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.195434 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4xmht" podUID="25b6e75e-25a0-4942-969c-b2a99a266373" containerName="registry-server" containerID="cri-o://1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad" gracePeriod=2 Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.401351 4988 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6thck" Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.626077 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.747996 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4bct\" (UniqueName: \"kubernetes.io/projected/25b6e75e-25a0-4942-969c-b2a99a266373-kube-api-access-s4bct\") pod \"25b6e75e-25a0-4942-969c-b2a99a266373\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.748078 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-utilities\") pod \"25b6e75e-25a0-4942-969c-b2a99a266373\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.748135 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-catalog-content\") pod \"25b6e75e-25a0-4942-969c-b2a99a266373\" (UID: \"25b6e75e-25a0-4942-969c-b2a99a266373\") " Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.749145 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-utilities" (OuterVolumeSpecName: "utilities") pod "25b6e75e-25a0-4942-969c-b2a99a266373" (UID: "25b6e75e-25a0-4942-969c-b2a99a266373"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.753749 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b6e75e-25a0-4942-969c-b2a99a266373-kube-api-access-s4bct" (OuterVolumeSpecName: "kube-api-access-s4bct") pod "25b6e75e-25a0-4942-969c-b2a99a266373" (UID: "25b6e75e-25a0-4942-969c-b2a99a266373"). InnerVolumeSpecName "kube-api-access-s4bct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.808926 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25b6e75e-25a0-4942-969c-b2a99a266373" (UID: "25b6e75e-25a0-4942-969c-b2a99a266373"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.850222 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4bct\" (UniqueName: \"kubernetes.io/projected/25b6e75e-25a0-4942-969c-b2a99a266373-kube-api-access-s4bct\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.850737 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:44 crc kubenswrapper[4988]: I1123 07:00:44.850870 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b6e75e-25a0-4942-969c-b2a99a266373-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.204849 4988 generic.go:334] "Generic (PLEG): container finished" podID="25b6e75e-25a0-4942-969c-b2a99a266373" containerID="1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad" exitCode=0 Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.204901 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xmht" event={"ID":"25b6e75e-25a0-4942-969c-b2a99a266373","Type":"ContainerDied","Data":"1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad"} Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.204911 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xmht" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.204936 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xmht" event={"ID":"25b6e75e-25a0-4942-969c-b2a99a266373","Type":"ContainerDied","Data":"1f1446e5ab5bafa4c35f38e75f7b5557f4b49a04ca354dcc5b549d5c53281808"} Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.204955 4988 scope.go:117] "RemoveContainer" containerID="1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.247251 4988 scope.go:117] "RemoveContainer" containerID="2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.266288 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xmht"] Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.271519 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4xmht"] Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.281464 4988 scope.go:117] "RemoveContainer" containerID="52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.299377 4988 scope.go:117] "RemoveContainer" containerID="1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad" Nov 23 07:00:45 crc kubenswrapper[4988]: E1123 07:00:45.300011 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad\": container with ID starting with 1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad not found: ID does not exist" containerID="1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.300082 
4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad"} err="failed to get container status \"1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad\": rpc error: code = NotFound desc = could not find container \"1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad\": container with ID starting with 1c29e43b86b1cfb89f36c36445984d93173df00554a40474fe8dde993e1376ad not found: ID does not exist" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.300141 4988 scope.go:117] "RemoveContainer" containerID="2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051" Nov 23 07:00:45 crc kubenswrapper[4988]: E1123 07:00:45.300597 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051\": container with ID starting with 2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051 not found: ID does not exist" containerID="2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.300639 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051"} err="failed to get container status \"2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051\": rpc error: code = NotFound desc = could not find container \"2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051\": container with ID starting with 2e188e6f3809ec3573d3edefd3721b9965503f462ce611865f2ce13a7b3d8051 not found: ID does not exist" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.300668 4988 scope.go:117] "RemoveContainer" containerID="52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736" Nov 23 07:00:45 crc kubenswrapper[4988]: E1123 07:00:45.301029 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736\": container with ID starting with 52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736 not found: ID does not exist" containerID="52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.301065 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736"} err="failed to get container status \"52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736\": rpc error: code = NotFound desc = could not find container \"52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736\": container with ID starting with 52261069871d2e4c94c3d23f15e48d01e62eb8f6d384139642fa33d4db6ea736 not found: ID does not exist" Nov 23 07:00:45 crc kubenswrapper[4988]: I1123 07:00:45.977787 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6flql" Nov 23 07:00:46 crc kubenswrapper[4988]: I1123 07:00:46.504344 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b6e75e-25a0-4942-969c-b2a99a266373" path="/var/lib/kubelet/pods/25b6e75e-25a0-4942-969c-b2a99a266373/volumes" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.490950 4988 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh"] Nov 23 07:00:47 crc kubenswrapper[4988]: E1123 07:00:47.491264 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerName="registry-server" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.491289 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerName="registry-server" Nov 23 07:00:47 crc kubenswrapper[4988]: E1123 07:00:47.491328 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b6e75e-25a0-4942-969c-b2a99a266373" containerName="extract-utilities" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.491340 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b6e75e-25a0-4942-969c-b2a99a266373" containerName="extract-utilities" Nov 23 07:00:47 crc kubenswrapper[4988]: E1123 07:00:47.491359 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b6e75e-25a0-4942-969c-b2a99a266373" containerName="registry-server" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.491368 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b6e75e-25a0-4942-969c-b2a99a266373" containerName="registry-server" Nov 23 07:00:47 crc kubenswrapper[4988]: E1123 07:00:47.491378 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerName="extract-content" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.491386 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerName="extract-content" Nov 23 07:00:47 crc kubenswrapper[4988]: E1123 07:00:47.491402 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerName="extract-utilities" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.491410 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerName="extract-utilities" Nov 23 07:00:47 crc kubenswrapper[4988]: E1123 07:00:47.491429 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b6e75e-25a0-4942-969c-b2a99a266373" containerName="extract-content" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.491437 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b6e75e-25a0-4942-969c-b2a99a266373" containerName="extract-content" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.491566 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e07f374-c82a-4782-9bc0-9e289df776bf" containerName="registry-server" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.491583 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b6e75e-25a0-4942-969c-b2a99a266373" containerName="registry-server" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.492611 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.494573 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.511929 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh"] Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.691460 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.691517 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bmsv\" (UniqueName: \"kubernetes.io/projected/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-kube-api-access-5bmsv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.691561 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.792341 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.792399 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bmsv\" (UniqueName: \"kubernetes.io/projected/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-kube-api-access-5bmsv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.792439 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.792903 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.792949 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.815091 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bmsv\" (UniqueName: \"kubernetes.io/projected/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-kube-api-access-5bmsv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:47 crc kubenswrapper[4988]: I1123 07:00:47.815459 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:48 crc kubenswrapper[4988]: I1123 07:00:48.238555 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh"] Nov 23 07:00:49 crc kubenswrapper[4988]: I1123 07:00:49.232924 4988 generic.go:334] "Generic (PLEG): container finished" podID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerID="93f37f180860bb6a9827ce918416b18a2d80aac0417375ee84b3b7ab6310a455" exitCode=0 Nov 23 07:00:49 crc kubenswrapper[4988]: I1123 07:00:49.233070 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" event={"ID":"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f","Type":"ContainerDied","Data":"93f37f180860bb6a9827ce918416b18a2d80aac0417375ee84b3b7ab6310a455"} Nov 23 07:00:49 crc kubenswrapper[4988]: I1123 07:00:49.233513 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" event={"ID":"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f","Type":"ContainerStarted","Data":"fe7f11b583c2893c350c8ebf9e47062ab03294551df5889e93d958f4790e55ab"} Nov 23 07:00:51 crc kubenswrapper[4988]: E1123 07:00:51.466398 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e3d6293_1a2e_4ca5_84a9_26c341a9ea6f.slice/crio-conmon-93f37f180860bb6a9827ce918416b18a2d80aac0417375ee84b3b7ab6310a455.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:00:51 crc kubenswrapper[4988]: I1123 07:00:51.672930 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:00:51 crc kubenswrapper[4988]: I1123 07:00:51.672997 4988 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:00:52 crc kubenswrapper[4988]: I1123 07:00:52.253992 4988 generic.go:334] "Generic (PLEG): container finished" podID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerID="63e59e0354cd950633d00690df9816aaaba78ebc803b41fffda9d9b69a2c9afb" exitCode=0 Nov 23 07:00:52 crc kubenswrapper[4988]: I1123 07:00:52.254024 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" event={"ID":"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f","Type":"ContainerDied","Data":"63e59e0354cd950633d00690df9816aaaba78ebc803b41fffda9d9b69a2c9afb"} Nov 23 07:00:53 crc kubenswrapper[4988]: I1123 07:00:53.261302 4988 generic.go:334] "Generic (PLEG): container finished" podID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerID="302f7cd053677551316bb32a3596af62eaafa791a0822a1fa46ed3acdc08fd44" exitCode=0 Nov 23 07:00:53 crc kubenswrapper[4988]: I1123 07:00:53.261375 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" event={"ID":"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f","Type":"ContainerDied","Data":"302f7cd053677551316bb32a3596af62eaafa791a0822a1fa46ed3acdc08fd44"} Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.408438 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-l95c9" Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.537319 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.602037 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-util\") pod \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.602110 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bmsv\" (UniqueName: \"kubernetes.io/projected/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-kube-api-access-5bmsv\") pod \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.602153 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-bundle\") pod \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\" (UID: \"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f\") " Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.603400 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-bundle" (OuterVolumeSpecName: "bundle") pod "9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" (UID: "9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.604990 4988 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.616445 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-kube-api-access-5bmsv" (OuterVolumeSpecName: "kube-api-access-5bmsv") pod "9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" (UID: "9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f"). InnerVolumeSpecName "kube-api-access-5bmsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.621027 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-util" (OuterVolumeSpecName: "util") pod "9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" (UID: "9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.706894 4988 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-util\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:54 crc kubenswrapper[4988]: I1123 07:00:54.706944 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bmsv\" (UniqueName: \"kubernetes.io/projected/9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f-kube-api-access-5bmsv\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:55 crc kubenswrapper[4988]: I1123 07:00:55.278499 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" event={"ID":"9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f","Type":"ContainerDied","Data":"fe7f11b583c2893c350c8ebf9e47062ab03294551df5889e93d958f4790e55ab"} Nov 23 07:00:55 crc kubenswrapper[4988]: I1123 07:00:55.279112 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7f11b583c2893c350c8ebf9e47062ab03294551df5889e93d958f4790e55ab" Nov 23 07:00:55 crc kubenswrapper[4988]: I1123 07:00:55.278598 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.505591 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n"] Nov 23 07:00:57 crc kubenswrapper[4988]: E1123 07:00:57.505877 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerName="pull" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.505894 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerName="pull" Nov 23 07:00:57 crc kubenswrapper[4988]: E1123 07:00:57.505911 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerName="extract" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.505918 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerName="extract" Nov 23 07:00:57 crc kubenswrapper[4988]: E1123 07:00:57.505937 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerName="util" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.505945 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerName="util" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.506062 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f" containerName="extract" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.506567 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.509330 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.509500 4988 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-bj7kx" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.509909 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.520054 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n"] Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.549039 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cll82\" (UniqueName: \"kubernetes.io/projected/b5d5d6f5-18f8-4187-bb76-513b38c6bf72-kube-api-access-cll82\") pod \"cert-manager-operator-controller-manager-64cf6dff88-tg82n\" (UID: \"b5d5d6f5-18f8-4187-bb76-513b38c6bf72\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.549136 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5d5d6f5-18f8-4187-bb76-513b38c6bf72-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-tg82n\" (UID: \"b5d5d6f5-18f8-4187-bb76-513b38c6bf72\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.650241 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5d5d6f5-18f8-4187-bb76-513b38c6bf72-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-tg82n\" (UID: \"b5d5d6f5-18f8-4187-bb76-513b38c6bf72\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.650323 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cll82\" (UniqueName: \"kubernetes.io/projected/b5d5d6f5-18f8-4187-bb76-513b38c6bf72-kube-api-access-cll82\") pod \"cert-manager-operator-controller-manager-64cf6dff88-tg82n\" (UID: \"b5d5d6f5-18f8-4187-bb76-513b38c6bf72\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.651072 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b5d5d6f5-18f8-4187-bb76-513b38c6bf72-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-tg82n\" (UID: \"b5d5d6f5-18f8-4187-bb76-513b38c6bf72\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.672418 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cll82\" (UniqueName: \"kubernetes.io/projected/b5d5d6f5-18f8-4187-bb76-513b38c6bf72-kube-api-access-cll82\") pod \"cert-manager-operator-controller-manager-64cf6dff88-tg82n\" (UID: \"b5d5d6f5-18f8-4187-bb76-513b38c6bf72\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" Nov 23 07:00:57 crc kubenswrapper[4988]: I1123 07:00:57.821653 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" Nov 23 07:00:58 crc kubenswrapper[4988]: I1123 07:00:58.285551 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n"] Nov 23 07:00:58 crc kubenswrapper[4988]: W1123 07:00:58.295362 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5d5d6f5_18f8_4187_bb76_513b38c6bf72.slice/crio-72b9ba756b03bd80f3ce7f8832e053408093f0e2b5d8cdf31902d2efdc7e20d4 WatchSource:0}: Error finding container 72b9ba756b03bd80f3ce7f8832e053408093f0e2b5d8cdf31902d2efdc7e20d4: Status 404 returned error can't find the container with id 72b9ba756b03bd80f3ce7f8832e053408093f0e2b5d8cdf31902d2efdc7e20d4 Nov 23 07:00:59 crc kubenswrapper[4988]: I1123 07:00:59.305786 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" event={"ID":"b5d5d6f5-18f8-4187-bb76-513b38c6bf72","Type":"ContainerStarted","Data":"72b9ba756b03bd80f3ce7f8832e053408093f0e2b5d8cdf31902d2efdc7e20d4"} Nov 23 07:01:01 crc kubenswrapper[4988]: E1123 07:01:01.624240 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e3d6293_1a2e_4ca5_84a9_26c341a9ea6f.slice/crio-conmon-93f37f180860bb6a9827ce918416b18a2d80aac0417375ee84b3b7ab6310a455.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:01:06 crc kubenswrapper[4988]: I1123 07:01:06.362901 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" event={"ID":"b5d5d6f5-18f8-4187-bb76-513b38c6bf72","Type":"ContainerStarted","Data":"26801a548f12c0bb7e1d55f2cc6f597d853032e0d31ba7ba26cd9fe1403fc21b"} Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.651796 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-tg82n" podStartSLOduration=5.730626125 podStartE2EDuration="12.651776914s" podCreationTimestamp="2025-11-23 07:00:57 +0000 UTC" firstStartedPulling="2025-11-23 07:00:58.296926622 +0000 UTC m=+910.605439385" lastFinishedPulling="2025-11-23 07:01:05.218077411 +0000 UTC m=+917.526590174" observedRunningTime="2025-11-23 07:01:06.396628445 +0000 UTC m=+918.705141208" watchObservedRunningTime="2025-11-23 07:01:09.651776914 +0000 UTC m=+921.960289677" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.652611 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn"] Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.653559 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.657607 4988 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-mcvhv" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.660565 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.660672 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.668832 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn"] Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.735028 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3cc0e918-fd10-4cc3-b406-3c7dd4e31945-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-rxnkn\" (UID: \"3cc0e918-fd10-4cc3-b406-3c7dd4e31945\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.735093 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kljkb\" (UniqueName: \"kubernetes.io/projected/3cc0e918-fd10-4cc3-b406-3c7dd4e31945-kube-api-access-kljkb\") pod \"cert-manager-cainjector-855d9ccff4-rxnkn\" (UID: \"3cc0e918-fd10-4cc3-b406-3c7dd4e31945\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.836942 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kljkb\" (UniqueName: \"kubernetes.io/projected/3cc0e918-fd10-4cc3-b406-3c7dd4e31945-kube-api-access-kljkb\") pod \"cert-manager-cainjector-855d9ccff4-rxnkn\" (UID: \"3cc0e918-fd10-4cc3-b406-3c7dd4e31945\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.837096 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3cc0e918-fd10-4cc3-b406-3c7dd4e31945-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-rxnkn\" (UID: \"3cc0e918-fd10-4cc3-b406-3c7dd4e31945\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.858988 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kljkb\" (UniqueName: \"kubernetes.io/projected/3cc0e918-fd10-4cc3-b406-3c7dd4e31945-kube-api-access-kljkb\") pod \"cert-manager-cainjector-855d9ccff4-rxnkn\" (UID: \"3cc0e918-fd10-4cc3-b406-3c7dd4e31945\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.860888 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3cc0e918-fd10-4cc3-b406-3c7dd4e31945-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-rxnkn\" (UID: \"3cc0e918-fd10-4cc3-b406-3c7dd4e31945\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" Nov 23 07:01:09 crc kubenswrapper[4988]: I1123 07:01:09.969099 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" Nov 23 07:01:10 crc kubenswrapper[4988]: I1123 07:01:10.392762 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn"] Nov 23 07:01:11 crc kubenswrapper[4988]: I1123 07:01:11.400546 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" event={"ID":"3cc0e918-fd10-4cc3-b406-3c7dd4e31945","Type":"ContainerStarted","Data":"cfb2d06c41a87aa6606738092ec6ef3ab0efac5722e9488ca596b1f35c9df822"} Nov 23 07:01:11 crc kubenswrapper[4988]: E1123 07:01:11.785013 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e3d6293_1a2e_4ca5_84a9_26c341a9ea6f.slice/crio-conmon-93f37f180860bb6a9827ce918416b18a2d80aac0417375ee84b3b7ab6310a455.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.179997 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cvznd"] Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.181767 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.186098 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cvznd"] Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.187043 4988 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-f984z" Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.218456 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cf554b8-f67a-43b4-83a0-c8c1819552a5-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cvznd\" (UID: \"5cf554b8-f67a-43b4-83a0-c8c1819552a5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.218624 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbhxf\" (UniqueName: \"kubernetes.io/projected/5cf554b8-f67a-43b4-83a0-c8c1819552a5-kube-api-access-pbhxf\") pod \"cert-manager-webhook-f4fb5df64-cvznd\" (UID: \"5cf554b8-f67a-43b4-83a0-c8c1819552a5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.320621 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cf554b8-f67a-43b4-83a0-c8c1819552a5-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cvznd\" (UID: \"5cf554b8-f67a-43b4-83a0-c8c1819552a5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.321355 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbhxf\" (UniqueName: \"kubernetes.io/projected/5cf554b8-f67a-43b4-83a0-c8c1819552a5-kube-api-access-pbhxf\") pod \"cert-manager-webhook-f4fb5df64-cvznd\" (UID: \"5cf554b8-f67a-43b4-83a0-c8c1819552a5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.340605 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cf554b8-f67a-43b4-83a0-c8c1819552a5-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cvznd\" (UID: \"5cf554b8-f67a-43b4-83a0-c8c1819552a5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.357556 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbhxf\" (UniqueName: \"kubernetes.io/projected/5cf554b8-f67a-43b4-83a0-c8c1819552a5-kube-api-access-pbhxf\") pod \"cert-manager-webhook-f4fb5df64-cvznd\" (UID: \"5cf554b8-f67a-43b4-83a0-c8c1819552a5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:15 crc kubenswrapper[4988]: I1123 07:01:15.501625 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:18 crc kubenswrapper[4988]: I1123 07:01:18.002513 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cvznd"] Nov 23 07:01:18 crc kubenswrapper[4988]: I1123 07:01:18.464243 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" event={"ID":"5cf554b8-f67a-43b4-83a0-c8c1819552a5","Type":"ContainerStarted","Data":"1be982496da27119a8b671fc9dedc73ed9a83f17d028b1748f1184c70081b8b1"} Nov 23 07:01:18 crc kubenswrapper[4988]: I1123 07:01:18.464675 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" event={"ID":"5cf554b8-f67a-43b4-83a0-c8c1819552a5","Type":"ContainerStarted","Data":"ad2083ff651b30f3f6f60783287d46f14fa5f073e81fba49fa48d900dacfc579"} Nov 23 07:01:18 crc kubenswrapper[4988]: I1123 07:01:18.464693 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:18 crc kubenswrapper[4988]: I1123 07:01:18.466150 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" event={"ID":"3cc0e918-fd10-4cc3-b406-3c7dd4e31945","Type":"ContainerStarted","Data":"128e58020afa66c1796b8231aa1da716a9e0d904bd96696827e2adbabf932556"} Nov 23 07:01:18 crc kubenswrapper[4988]: I1123 07:01:18.479838 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" podStartSLOduration=3.479820971 podStartE2EDuration="3.479820971s" podCreationTimestamp="2025-11-23 07:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:18.479139264 +0000 UTC m=+930.787652027" watchObservedRunningTime="2025-11-23 07:01:18.479820971 +0000 UTC m=+930.788333744" Nov 23 07:01:21 crc kubenswrapper[4988]: I1123 07:01:21.672479 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:01:21 crc kubenswrapper[4988]: I1123 07:01:21.672935 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 23 07:01:21 crc kubenswrapper[4988]: E1123 07:01:21.957105 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e3d6293_1a2e_4ca5_84a9_26c341a9ea6f.slice/crio-conmon-93f37f180860bb6a9827ce918416b18a2d80aac0417375ee84b3b7ab6310a455.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:01:25 crc kubenswrapper[4988]: I1123 07:01:25.505474 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-cvznd" Nov 23 07:01:25 crc kubenswrapper[4988]: I1123 07:01:25.535471 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-rxnkn" podStartSLOduration=9.036383645 podStartE2EDuration="16.535446564s" podCreationTimestamp="2025-11-23 07:01:09 +0000 UTC" firstStartedPulling="2025-11-23 07:01:10.418518374 +0000 UTC m=+922.727031137" lastFinishedPulling="2025-11-23 07:01:17.917581293 +0000 UTC m=+930.226094056" observedRunningTime="2025-11-23 07:01:18.503654633 +0000 UTC m=+930.812167396" watchObservedRunningTime="2025-11-23 07:01:25.535446564 +0000 UTC m=+937.843959367" Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.532487 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-49vd4"] Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.533790 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-49vd4" Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.535643 4988 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-crgdg" Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.545108 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-49vd4"] Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.616285 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nhl6\" (UniqueName: \"kubernetes.io/projected/7423cc4d-808d-439a-b38c-106914e44da4-kube-api-access-7nhl6\") pod \"cert-manager-86cb77c54b-49vd4\" (UID: \"7423cc4d-808d-439a-b38c-106914e44da4\") " pod="cert-manager/cert-manager-86cb77c54b-49vd4" Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.616766 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7423cc4d-808d-439a-b38c-106914e44da4-bound-sa-token\") pod \"cert-manager-86cb77c54b-49vd4\" (UID: \"7423cc4d-808d-439a-b38c-106914e44da4\") " pod="cert-manager/cert-manager-86cb77c54b-49vd4" Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.717557 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhl6\" (UniqueName: \"kubernetes.io/projected/7423cc4d-808d-439a-b38c-106914e44da4-kube-api-access-7nhl6\") pod \"cert-manager-86cb77c54b-49vd4\" (UID: \"7423cc4d-808d-439a-b38c-106914e44da4\") " pod="cert-manager/cert-manager-86cb77c54b-49vd4" Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.717611 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7423cc4d-808d-439a-b38c-106914e44da4-bound-sa-token\") pod \"cert-manager-86cb77c54b-49vd4\" (UID: 
\"7423cc4d-808d-439a-b38c-106914e44da4\") " pod="cert-manager/cert-manager-86cb77c54b-49vd4" Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.741126 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7423cc4d-808d-439a-b38c-106914e44da4-bound-sa-token\") pod \"cert-manager-86cb77c54b-49vd4\" (UID: \"7423cc4d-808d-439a-b38c-106914e44da4\") " pod="cert-manager/cert-manager-86cb77c54b-49vd4" Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.745163 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nhl6\" (UniqueName: \"kubernetes.io/projected/7423cc4d-808d-439a-b38c-106914e44da4-kube-api-access-7nhl6\") pod \"cert-manager-86cb77c54b-49vd4\" (UID: \"7423cc4d-808d-439a-b38c-106914e44da4\") " pod="cert-manager/cert-manager-86cb77c54b-49vd4" Nov 23 07:01:28 crc kubenswrapper[4988]: I1123 07:01:28.857409 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-49vd4" Nov 23 07:01:29 crc kubenswrapper[4988]: I1123 07:01:29.158931 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-49vd4"] Nov 23 07:01:29 crc kubenswrapper[4988]: I1123 07:01:29.557399 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-49vd4" event={"ID":"7423cc4d-808d-439a-b38c-106914e44da4","Type":"ContainerStarted","Data":"17a204663c7a6b6a6b1b2cfb289c08d972650adb3b959e4ef0e0c9b847c9418d"} Nov 23 07:01:29 crc kubenswrapper[4988]: I1123 07:01:29.557474 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-49vd4" event={"ID":"7423cc4d-808d-439a-b38c-106914e44da4","Type":"ContainerStarted","Data":"ab5a2f7f0f4be36bbb6c087315cf34b7cff803bc8ce318d9a4be286dda42df3e"} Nov 23 07:01:29 crc kubenswrapper[4988]: I1123 07:01:29.576271 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-49vd4" podStartSLOduration=1.576237838 podStartE2EDuration="1.576237838s" podCreationTimestamp="2025-11-23 07:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:01:29.573108717 +0000 UTC m=+941.881621520" watchObservedRunningTime="2025-11-23 07:01:29.576237838 +0000 UTC m=+941.884750641" Nov 23 07:01:32 crc kubenswrapper[4988]: E1123 07:01:32.127480 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e3d6293_1a2e_4ca5_84a9_26c341a9ea6f.slice/crio-conmon-93f37f180860bb6a9827ce918416b18a2d80aac0417375ee84b3b7ab6310a455.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.393043 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gxpbg"] Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.394945 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gxpbg" Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.397943 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.397981 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-l244w" Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.403596 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.420132 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gxpbg"] Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.490481 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c484t\" (UniqueName: \"kubernetes.io/projected/1b516e77-2ceb-4771-9b84-755110ce8085-kube-api-access-c484t\") pod \"openstack-operator-index-gxpbg\" (UID: \"1b516e77-2ceb-4771-9b84-755110ce8085\") " pod="openstack-operators/openstack-operator-index-gxpbg" Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.592596 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c484t\" (UniqueName: \"kubernetes.io/projected/1b516e77-2ceb-4771-9b84-755110ce8085-kube-api-access-c484t\") pod \"openstack-operator-index-gxpbg\" (UID: \"1b516e77-2ceb-4771-9b84-755110ce8085\") " pod="openstack-operators/openstack-operator-index-gxpbg" Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.615019 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c484t\" (UniqueName: \"kubernetes.io/projected/1b516e77-2ceb-4771-9b84-755110ce8085-kube-api-access-c484t\") pod \"openstack-operator-index-gxpbg\" (UID: \"1b516e77-2ceb-4771-9b84-755110ce8085\") " pod="openstack-operators/openstack-operator-index-gxpbg" Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.732127 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gxpbg" Nov 23 07:01:39 crc kubenswrapper[4988]: I1123 07:01:39.945735 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gxpbg"] Nov 23 07:01:40 crc kubenswrapper[4988]: I1123 07:01:40.642898 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxpbg" event={"ID":"1b516e77-2ceb-4771-9b84-755110ce8085","Type":"ContainerStarted","Data":"6285737cdca6b78fcee4e2a1e368204d39dd26a9e1e44d1290522fd4482ab19f"} Nov 23 07:01:41 crc kubenswrapper[4988]: I1123 07:01:41.654055 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxpbg" event={"ID":"1b516e77-2ceb-4771-9b84-755110ce8085","Type":"ContainerStarted","Data":"3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6"} Nov 23 07:01:41 crc kubenswrapper[4988]: I1123 07:01:41.689404 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gxpbg" podStartSLOduration=1.97387369 podStartE2EDuration="2.689380613s" podCreationTimestamp="2025-11-23 07:01:39 +0000 UTC" firstStartedPulling="2025-11-23 07:01:39.963978317 +0000 UTC m=+952.272491080" lastFinishedPulling="2025-11-23 07:01:40.67948523 +0000 UTC m=+952.987998003" observedRunningTime="2025-11-23 07:01:41.678188276 +0000 UTC m=+953.986701079" watchObservedRunningTime="2025-11-23 07:01:41.689380613 +0000 UTC m=+953.997893386" Nov 23 07:01:42 crc kubenswrapper[4988]: E1123 07:01:42.282525 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e3d6293_1a2e_4ca5_84a9_26c341a9ea6f.slice/crio-conmon-93f37f180860bb6a9827ce918416b18a2d80aac0417375ee84b3b7ab6310a455.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:01:42 crc kubenswrapper[4988]: I1123 07:01:42.743021 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gxpbg"] Nov 23 07:01:43 crc kubenswrapper[4988]: I1123 07:01:43.357745 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xfz5r"] Nov 23 07:01:43 crc kubenswrapper[4988]: I1123 07:01:43.359579 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xfz5r" Nov 23 07:01:43 crc kubenswrapper[4988]: I1123 07:01:43.370034 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xfz5r"] Nov 23 07:01:43 crc kubenswrapper[4988]: I1123 07:01:43.480006 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v9hq\" (UniqueName: \"kubernetes.io/projected/723ca5e4-78e1-4e98-af52-13b7f8325692-kube-api-access-2v9hq\") pod \"openstack-operator-index-xfz5r\" (UID: \"723ca5e4-78e1-4e98-af52-13b7f8325692\") " pod="openstack-operators/openstack-operator-index-xfz5r" Nov 23 07:01:43 crc kubenswrapper[4988]: I1123 07:01:43.580872 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v9hq\" (UniqueName: \"kubernetes.io/projected/723ca5e4-78e1-4e98-af52-13b7f8325692-kube-api-access-2v9hq\") pod \"openstack-operator-index-xfz5r\" (UID: \"723ca5e4-78e1-4e98-af52-13b7f8325692\") " pod="openstack-operators/openstack-operator-index-xfz5r" Nov 23 07:01:43 crc kubenswrapper[4988]: I1123 07:01:43.606764 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v9hq\" (UniqueName: \"kubernetes.io/projected/723ca5e4-78e1-4e98-af52-13b7f8325692-kube-api-access-2v9hq\") pod \"openstack-operator-index-xfz5r\" (UID: \"723ca5e4-78e1-4e98-af52-13b7f8325692\") " pod="openstack-operators/openstack-operator-index-xfz5r" Nov 23 07:01:43 crc kubenswrapper[4988]: I1123 07:01:43.668033 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-gxpbg" podUID="1b516e77-2ceb-4771-9b84-755110ce8085" containerName="registry-server" containerID="cri-o://3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6" gracePeriod=2 Nov 23 07:01:43 crc kubenswrapper[4988]: I1123 07:01:43.683764 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xfz5r" Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.091477 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gxpbg" Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.151718 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xfz5r"] Nov 23 07:01:44 crc kubenswrapper[4988]: W1123 07:01:44.156768 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723ca5e4_78e1_4e98_af52_13b7f8325692.slice/crio-5506151cb28608c76fbb3d8e4843d7f73ae8373b01ff4f32a883574a12c43955 WatchSource:0}: Error finding container 5506151cb28608c76fbb3d8e4843d7f73ae8373b01ff4f32a883574a12c43955: Status 404 returned error can't find the container with id 5506151cb28608c76fbb3d8e4843d7f73ae8373b01ff4f32a883574a12c43955 Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.191029 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c484t\" (UniqueName: \"kubernetes.io/projected/1b516e77-2ceb-4771-9b84-755110ce8085-kube-api-access-c484t\") pod \"1b516e77-2ceb-4771-9b84-755110ce8085\" (UID: \"1b516e77-2ceb-4771-9b84-755110ce8085\") " Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.195358 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b516e77-2ceb-4771-9b84-755110ce8085-kube-api-access-c484t" (OuterVolumeSpecName: "kube-api-access-c484t") pod "1b516e77-2ceb-4771-9b84-755110ce8085" (UID: "1b516e77-2ceb-4771-9b84-755110ce8085"). InnerVolumeSpecName "kube-api-access-c484t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.292932 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c484t\" (UniqueName: \"kubernetes.io/projected/1b516e77-2ceb-4771-9b84-755110ce8085-kube-api-access-c484t\") on node \"crc\" DevicePath \"\"" Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.677610 4988 generic.go:334] "Generic (PLEG): container finished" podID="1b516e77-2ceb-4771-9b84-755110ce8085" containerID="3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6" exitCode=0 Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.677685 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxpbg" event={"ID":"1b516e77-2ceb-4771-9b84-755110ce8085","Type":"ContainerDied","Data":"3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6"} Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.677741 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gxpbg" Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.678121 4988 scope.go:117] "RemoveContainer" containerID="3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6" Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.678100 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gxpbg" event={"ID":"1b516e77-2ceb-4771-9b84-755110ce8085","Type":"ContainerDied","Data":"6285737cdca6b78fcee4e2a1e368204d39dd26a9e1e44d1290522fd4482ab19f"} Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.682143 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xfz5r" event={"ID":"723ca5e4-78e1-4e98-af52-13b7f8325692","Type":"ContainerStarted","Data":"5506151cb28608c76fbb3d8e4843d7f73ae8373b01ff4f32a883574a12c43955"} Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.707409 4988 scope.go:117] "RemoveContainer" containerID="3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6" Nov 23 07:01:44 crc kubenswrapper[4988]: E1123 07:01:44.716656 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6\": container with ID starting with 3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6 not found: ID does not exist" containerID="3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6" Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.716736 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6"} err="failed to get container status \"3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6\": rpc error: code = NotFound desc = could not find container \"3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6\": container with ID starting with 3a8de3e3fcc9eb33a4c761da5a1f159a458908adf0f41c2663b4422d42f69df6 not found: ID does not exist" Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.719083 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gxpbg"] Nov 23 07:01:44 crc kubenswrapper[4988]: I1123 07:01:44.723611 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-gxpbg"] Nov 23 07:01:45 crc kubenswrapper[4988]: I1123 07:01:45.694004 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xfz5r" event={"ID":"723ca5e4-78e1-4e98-af52-13b7f8325692","Type":"ContainerStarted","Data":"6a43c5ccaba3516a6d238e7995c02d8bd89c14092a08e352e98a1048c721db2b"} Nov 23 07:01:46 crc kubenswrapper[4988]: I1123 07:01:46.509121 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b516e77-2ceb-4771-9b84-755110ce8085" path="/var/lib/kubelet/pods/1b516e77-2ceb-4771-9b84-755110ce8085/volumes" Nov 23 07:01:51 crc kubenswrapper[4988]: I1123 07:01:51.671973 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:01:51 crc kubenswrapper[4988]: I1123 07:01:51.672791 4988 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:01:51 crc kubenswrapper[4988]: I1123 07:01:51.672862 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:01:51 crc kubenswrapper[4988]: I1123 07:01:51.673724 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34c85a0a4c08a90d6702cc03e588a3106590e0c8cfb5f38fb5c1e6482b5b2faf"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:01:51 crc kubenswrapper[4988]: I1123 07:01:51.673814 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://34c85a0a4c08a90d6702cc03e588a3106590e0c8cfb5f38fb5c1e6482b5b2faf" gracePeriod=600 Nov 23 07:01:52 crc kubenswrapper[4988]: I1123 07:01:52.752615 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="34c85a0a4c08a90d6702cc03e588a3106590e0c8cfb5f38fb5c1e6482b5b2faf" exitCode=0 Nov 23 07:01:52 crc kubenswrapper[4988]: I1123 07:01:52.752652 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"34c85a0a4c08a90d6702cc03e588a3106590e0c8cfb5f38fb5c1e6482b5b2faf"} Nov 23 07:01:52 crc kubenswrapper[4988]: I1123 07:01:52.753616 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"c58a12ac5dbe2a21fab60b7320f73b8a7940a58137541c4513bbaa568ab1edb1"} Nov 23 07:01:52 crc kubenswrapper[4988]: I1123 07:01:52.753648 4988 scope.go:117] "RemoveContainer" containerID="ae910c0fb450e025f230121941feb80702360a059115278b81abe53533b952bb" Nov 23 07:01:52 crc kubenswrapper[4988]: I1123 07:01:52.783551 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xfz5r" podStartSLOduration=9.335203909 podStartE2EDuration="9.783523682s" podCreationTimestamp="2025-11-23 07:01:43 +0000 UTC" firstStartedPulling="2025-11-23 07:01:44.161185447 +0000 UTC m=+956.469698220" lastFinishedPulling="2025-11-23 07:01:44.60950523 +0000 UTC m=+956.918017993" observedRunningTime="2025-11-23 07:01:45.714521836 +0000 UTC m=+958.023034609" watchObservedRunningTime="2025-11-23 07:01:52.783523682 +0000 UTC m=+965.092036445" Nov 23 07:01:53 crc kubenswrapper[4988]: I1123 07:01:53.685013 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xfz5r" Nov 23 07:01:53 crc kubenswrapper[4988]: I1123 07:01:53.685535 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xfz5r" Nov 23 07:01:53 crc kubenswrapper[4988]: I1123 07:01:53.725043 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack-operators/openstack-operator-index-xfz5r" Nov 23 07:01:53 crc kubenswrapper[4988]: I1123 07:01:53.794034 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xfz5r" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.211780 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k"] Nov 23 07:01:56 crc kubenswrapper[4988]: E1123 07:01:56.212697 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b516e77-2ceb-4771-9b84-755110ce8085" containerName="registry-server" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.212717 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b516e77-2ceb-4771-9b84-755110ce8085" containerName="registry-server" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.212935 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b516e77-2ceb-4771-9b84-755110ce8085" containerName="registry-server" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.214078 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.221141 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7mrds" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.222117 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k"] Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.373689 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5vp\" (UniqueName: \"kubernetes.io/projected/5c1af7b2-e795-4304-bbde-e70be9ae1520-kube-api-access-zl5vp\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.373779 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.373887 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.475271 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5vp\" (UniqueName: \"kubernetes.io/projected/5c1af7b2-e795-4304-bbde-e70be9ae1520-kube-api-access-zl5vp\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " 
pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.475340 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.476478 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.477065 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.479584 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.500871 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5vp\" (UniqueName: \"kubernetes.io/projected/5c1af7b2-e795-4304-bbde-e70be9ae1520-kube-api-access-zl5vp\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.538672 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.767695 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k"] Nov 23 07:01:56 crc kubenswrapper[4988]: I1123 07:01:56.789261 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" event={"ID":"5c1af7b2-e795-4304-bbde-e70be9ae1520","Type":"ContainerStarted","Data":"d865d5db59152294e756799bf07b6509f2f435bfad95b026c910fc8f1256714f"} Nov 23 07:01:57 crc kubenswrapper[4988]: I1123 07:01:57.800711 4988 generic.go:334] "Generic (PLEG): container finished" podID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerID="12eb78f2bb900c26270478c461dcbd97abbe53214ee945fdd6f7d43f7e44a7f9" exitCode=0 Nov 23 07:01:57 crc kubenswrapper[4988]: I1123 07:01:57.800792 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" event={"ID":"5c1af7b2-e795-4304-bbde-e70be9ae1520","Type":"ContainerDied","Data":"12eb78f2bb900c26270478c461dcbd97abbe53214ee945fdd6f7d43f7e44a7f9"} Nov 23 07:01:58 crc kubenswrapper[4988]: I1123 07:01:58.811306 4988 generic.go:334] "Generic (PLEG): container finished" podID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerID="10db8ec94fd76fb452b4cabab0656421ef5c4b7fa4e364106e4bccfbdc17ddf3" exitCode=0 Nov 23 07:01:58 crc kubenswrapper[4988]: I1123 07:01:58.812265 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" event={"ID":"5c1af7b2-e795-4304-bbde-e70be9ae1520","Type":"ContainerDied","Data":"10db8ec94fd76fb452b4cabab0656421ef5c4b7fa4e364106e4bccfbdc17ddf3"} Nov 23 07:01:59 crc kubenswrapper[4988]: I1123 07:01:59.820768 4988 generic.go:334] "Generic (PLEG): container finished" podID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerID="0c4e7f88f2086866379568eb974193ed91daec3d009ce0bdda0d56fcab6d4c71" exitCode=0 Nov 23 07:01:59 crc kubenswrapper[4988]: I1123 07:01:59.820828 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" event={"ID":"5c1af7b2-e795-4304-bbde-e70be9ae1520","Type":"ContainerDied","Data":"0c4e7f88f2086866379568eb974193ed91daec3d009ce0bdda0d56fcab6d4c71"} Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.211682 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.359253 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-bundle\") pod \"5c1af7b2-e795-4304-bbde-e70be9ae1520\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.359487 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl5vp\" (UniqueName: \"kubernetes.io/projected/5c1af7b2-e795-4304-bbde-e70be9ae1520-kube-api-access-zl5vp\") pod \"5c1af7b2-e795-4304-bbde-e70be9ae1520\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.359623 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-util\") pod \"5c1af7b2-e795-4304-bbde-e70be9ae1520\" (UID: \"5c1af7b2-e795-4304-bbde-e70be9ae1520\") " Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.361259 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-bundle" (OuterVolumeSpecName: "bundle") pod "5c1af7b2-e795-4304-bbde-e70be9ae1520" (UID: "5c1af7b2-e795-4304-bbde-e70be9ae1520"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.369027 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c1af7b2-e795-4304-bbde-e70be9ae1520-kube-api-access-zl5vp" (OuterVolumeSpecName: "kube-api-access-zl5vp") pod "5c1af7b2-e795-4304-bbde-e70be9ae1520" (UID: "5c1af7b2-e795-4304-bbde-e70be9ae1520"). InnerVolumeSpecName "kube-api-access-zl5vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.379338 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-util" (OuterVolumeSpecName: "util") pod "5c1af7b2-e795-4304-bbde-e70be9ae1520" (UID: "5c1af7b2-e795-4304-bbde-e70be9ae1520"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.461452 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl5vp\" (UniqueName: \"kubernetes.io/projected/5c1af7b2-e795-4304-bbde-e70be9ae1520-kube-api-access-zl5vp\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.461518 4988 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-util\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.461539 4988 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c1af7b2-e795-4304-bbde-e70be9ae1520-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.838581 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" event={"ID":"5c1af7b2-e795-4304-bbde-e70be9ae1520","Type":"ContainerDied","Data":"d865d5db59152294e756799bf07b6509f2f435bfad95b026c910fc8f1256714f"} Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.838675 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d865d5db59152294e756799bf07b6509f2f435bfad95b026c910fc8f1256714f" Nov 23 07:02:01 crc kubenswrapper[4988]: I1123 07:02:01.839280 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k" Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.107917 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss"] Nov 23 07:02:09 crc kubenswrapper[4988]: E1123 07:02:09.108792 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerName="util" Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.108804 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerName="util" Nov 23 07:02:09 crc kubenswrapper[4988]: E1123 07:02:09.108813 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerName="pull" Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.108818 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerName="pull" Nov 23 07:02:09 crc kubenswrapper[4988]: E1123 07:02:09.108832 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerName="extract" Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.108839 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerName="extract" Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.108954 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c1af7b2-e795-4304-bbde-e70be9ae1520" containerName="extract" Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.109562 4988 util.go:30] "No sandbox for pod can be found. 
Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.114644 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-z48q2"
Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.282617 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss"]
Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.291068 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxq6n\" (UniqueName: \"kubernetes.io/projected/82239a25-26cf-4cfd-b41a-c427b2289d76-kube-api-access-lxq6n\") pod \"openstack-operator-controller-operator-8486c7f98b-vvsss\" (UID: \"82239a25-26cf-4cfd-b41a-c427b2289d76\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss"
Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.392015 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxq6n\" (UniqueName: \"kubernetes.io/projected/82239a25-26cf-4cfd-b41a-c427b2289d76-kube-api-access-lxq6n\") pod \"openstack-operator-controller-operator-8486c7f98b-vvsss\" (UID: \"82239a25-26cf-4cfd-b41a-c427b2289d76\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss"
Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.418818 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxq6n\" (UniqueName: \"kubernetes.io/projected/82239a25-26cf-4cfd-b41a-c427b2289d76-kube-api-access-lxq6n\") pod \"openstack-operator-controller-operator-8486c7f98b-vvsss\" (UID: \"82239a25-26cf-4cfd-b41a-c427b2289d76\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss"
Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.427186 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss"
Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.710307 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss"]
Nov 23 07:02:09 crc kubenswrapper[4988]: I1123 07:02:09.908716 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss" event={"ID":"82239a25-26cf-4cfd-b41a-c427b2289d76","Type":"ContainerStarted","Data":"c8a0881cc84952dc66a73309cca72f9ed009d0d072ae39527d97175c87dd1df6"}
Nov 23 07:02:14 crc kubenswrapper[4988]: I1123 07:02:14.945011 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss" event={"ID":"82239a25-26cf-4cfd-b41a-c427b2289d76","Type":"ContainerStarted","Data":"26ad5f5a2209e58f8c81a0b553e9ca3b8edb957fe6f6b4681aaf48d182069821"}
Nov 23 07:02:17 crc kubenswrapper[4988]: I1123 07:02:17.995990 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss" event={"ID":"82239a25-26cf-4cfd-b41a-c427b2289d76","Type":"ContainerStarted","Data":"31cab4ac44a5b834d51bf964e5cdef419167508c88f0f2f762db4111814d5f58"}
Nov 23 07:02:18 crc kubenswrapper[4988]: I1123 07:02:18.042684 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss" podStartSLOduration=1.5411909270000002 podStartE2EDuration="9.042658918s" podCreationTimestamp="2025-11-23 07:02:09 +0000 UTC" firstStartedPulling="2025-11-23 07:02:09.721476981 +0000 UTC m=+982.029989744" lastFinishedPulling="2025-11-23 07:02:17.222944972 +0000 UTC m=+989.531457735" observedRunningTime="2025-11-23 07:02:18.036910173 +0000 UTC m=+990.345422976" watchObservedRunningTime="2025-11-23 07:02:18.042658918 +0000 UTC m=+990.351171721"
Nov 23 07:02:19 crc kubenswrapper[4988]: I1123 07:02:19.001996 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss"
Nov 23 07:02:19 crc kubenswrapper[4988]: I1123 07:02:19.004617 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-vvsss"
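
Note: the "Observed pod startup duration" entry above is internally consistent. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (07:02:18.042658918 - 07:02:09 = 9.042658918s), and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling = 7.501467991s), giving 1.541190927s, exactly the logged value. A small Go check of the arithmetic, with the timestamps copied from the entry (the subtraction rule is inferred from the numbers, so treat it as a reading aid rather than the tracker's definition):

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	// Same textual form the kubelet logs: "2025-11-23 07:02:09 +0000 UTC".
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-11-23 07:02:09 +0000 UTC")
    	firstPull := mustParse("2025-11-23 07:02:09.721476981 +0000 UTC")
    	lastPull := mustParse("2025-11-23 07:02:17.222944972 +0000 UTC")
    	running := mustParse("2025-11-23 07:02:18.042658918 +0000 UTC")

    	e2e := running.Sub(created)          // 9.042658918s, the logged podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // 1.541190927s, the logged podStartSLOduration
    	fmt.Println(e2e, slo)
    }
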
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.930170 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2"]
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.932549 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2"
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.937012 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch"]
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.938111 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch"
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.939685 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jfxt4"
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.940258 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ck2nr"
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.941813 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2"]
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.962045 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch"]
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.983817 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5"]
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.986275 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5"
Nov 23 07:02:35 crc kubenswrapper[4988]: I1123 07:02:35.987896 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mtgll"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.011100 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.040784 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.042279 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.045886 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rhf8q"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.046074 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.047094 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.049588 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dqh28"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.051385 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.052376 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.054257 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-pdk2w"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.067817 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.093473 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.100187 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.102399 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t47d6\" (UniqueName: \"kubernetes.io/projected/08f2612c-cf4d-42a5-81df-238887b3e77d-kube-api-access-t47d6\") pod \"cinder-operator-controller-manager-6d8fd67bf7-jvdch\" (UID: \"08f2612c-cf4d-42a5-81df-238887b3e77d\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.102919 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d2r4\" (UniqueName: \"kubernetes.io/projected/9fe52e6b-130d-42a1-b6f5-334df6a86ceb-kube-api-access-4d2r4\") pod \"barbican-operator-controller-manager-7768f8c84f-kt7s2\" (UID: \"9fe52e6b-130d-42a1-b6f5-334df6a86ceb\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.103026 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5jg\" (UniqueName: \"kubernetes.io/projected/5dfa0ca5-a839-48ed-be21-2f065840d1f9-kube-api-access-4s5jg\") pod \"designate-operator-controller-manager-56dfb6b67f-fhfm5\" (UID: \"5dfa0ca5-a839-48ed-be21-2f065840d1f9\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.111165 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.112224 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.114607 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kbbsl"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.114890 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.122122 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.123352 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.128352 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8qb8d"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.128492 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.129562 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.131413 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-l6dvd"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.155795 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.168480 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.184258 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.185642 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.189662 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-xdh2h"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204640 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t47d6\" (UniqueName: \"kubernetes.io/projected/08f2612c-cf4d-42a5-81df-238887b3e77d-kube-api-access-t47d6\") pod \"cinder-operator-controller-manager-6d8fd67bf7-jvdch\" (UID: \"08f2612c-cf4d-42a5-81df-238887b3e77d\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204693 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvdhd\" (UniqueName: \"kubernetes.io/projected/9cac3108-fd73-4cfb-a801-b255fcaf9860-kube-api-access-lvdhd\") pod \"manila-operator-controller-manager-7bb88cb858-4rjh7\" (UID: \"9cac3108-fd73-4cfb-a801-b255fcaf9860\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204714 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j9j5\" (UniqueName: \"kubernetes.io/projected/e9c6dc0a-9868-4554-8563-d6b83ed3d26b-kube-api-access-7j9j5\") pod \"keystone-operator-controller-manager-7879fb76fd-l44mr\" (UID: \"e9c6dc0a-9868-4554-8563-d6b83ed3d26b\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204749 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zmq\" (UniqueName: \"kubernetes.io/projected/a583f2f1-c89f-499a-a884-959c259bb45f-kube-api-access-67zmq\") pod \"horizon-operator-controller-manager-5d86b44686-5qfzn\" (UID: \"a583f2f1-c89f-499a-a884-959c259bb45f\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204769 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlcbd\" (UniqueName: \"kubernetes.io/projected/cbf8310b-969a-4233-9224-5fead64dda9e-kube-api-access-vlcbd\") pod \"heat-operator-controller-manager-bf4c6585d-bdkmj\" (UID: \"cbf8310b-969a-4233-9224-5fead64dda9e\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204794 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5jg\" (UniqueName: \"kubernetes.io/projected/5dfa0ca5-a839-48ed-be21-2f065840d1f9-kube-api-access-4s5jg\") pod \"designate-operator-controller-manager-56dfb6b67f-fhfm5\" (UID: \"5dfa0ca5-a839-48ed-be21-2f065840d1f9\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204816 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d2r4\" (UniqueName: \"kubernetes.io/projected/9fe52e6b-130d-42a1-b6f5-334df6a86ceb-kube-api-access-4d2r4\") pod \"barbican-operator-controller-manager-7768f8c84f-kt7s2\" (UID: \"9fe52e6b-130d-42a1-b6f5-334df6a86ceb\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204838 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzdl2\" (UniqueName: \"kubernetes.io/projected/f572ed19-6b6b-43e6-a2b9-b59bfe403460-kube-api-access-jzdl2\") pod \"infra-operator-controller-manager-769d9c7585-hv2bc\" (UID: \"f572ed19-6b6b-43e6-a2b9-b59bfe403460\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204858 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn47w\" (UniqueName: \"kubernetes.io/projected/74d47023-c608-443f-a863-521ec94aef70-kube-api-access-vn47w\") pod \"glance-operator-controller-manager-8667fbf6f6-kw5fz\" (UID: \"74d47023-c608-443f-a863-521ec94aef70\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204886 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f572ed19-6b6b-43e6-a2b9-b59bfe403460-cert\") pod \"infra-operator-controller-manager-769d9c7585-hv2bc\" (UID: \"f572ed19-6b6b-43e6-a2b9-b59bfe403460\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.204904 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2t77\" (UniqueName: \"kubernetes.io/projected/c8611d57-ca99-4b6d-ade8-f7f3bce489f4-kube-api-access-g2t77\") pod \"ironic-operator-controller-manager-5c75d7c94b-4fk5h\" (UID: \"c8611d57-ca99-4b6d-ade8-f7f3bce489f4\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.205737 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.219533 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.234269 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.235639 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.242444 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-th44q"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.247433 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.248458 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.260931 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t47d6\" (UniqueName: \"kubernetes.io/projected/08f2612c-cf4d-42a5-81df-238887b3e77d-kube-api-access-t47d6\") pod \"cinder-operator-controller-manager-6d8fd67bf7-jvdch\" (UID: \"08f2612c-cf4d-42a5-81df-238887b3e77d\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.265592 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kkd2m"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.265992 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.266066 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d2r4\" (UniqueName: \"kubernetes.io/projected/9fe52e6b-130d-42a1-b6f5-334df6a86ceb-kube-api-access-4d2r4\") pod \"barbican-operator-controller-manager-7768f8c84f-kt7s2\" (UID: \"9fe52e6b-130d-42a1-b6f5-334df6a86ceb\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.270390 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5jg\" (UniqueName: \"kubernetes.io/projected/5dfa0ca5-a839-48ed-be21-2f065840d1f9-kube-api-access-4s5jg\") pod \"designate-operator-controller-manager-56dfb6b67f-fhfm5\" (UID: \"5dfa0ca5-a839-48ed-be21-2f065840d1f9\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.304265 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.306151 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67zmq\" (UniqueName: \"kubernetes.io/projected/a583f2f1-c89f-499a-a884-959c259bb45f-kube-api-access-67zmq\") pod \"horizon-operator-controller-manager-5d86b44686-5qfzn\" (UID: \"a583f2f1-c89f-499a-a884-959c259bb45f\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.306186 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlcbd\" (UniqueName: \"kubernetes.io/projected/cbf8310b-969a-4233-9224-5fead64dda9e-kube-api-access-vlcbd\") pod \"heat-operator-controller-manager-bf4c6585d-bdkmj\" (UID: \"cbf8310b-969a-4233-9224-5fead64dda9e\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.306242 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzdl2\" (UniqueName: \"kubernetes.io/projected/f572ed19-6b6b-43e6-a2b9-b59bfe403460-kube-api-access-jzdl2\") pod \"infra-operator-controller-manager-769d9c7585-hv2bc\" (UID: \"f572ed19-6b6b-43e6-a2b9-b59bfe403460\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.306265 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn47w\" (UniqueName: \"kubernetes.io/projected/74d47023-c608-443f-a863-521ec94aef70-kube-api-access-vn47w\") pod \"glance-operator-controller-manager-8667fbf6f6-kw5fz\" (UID: \"74d47023-c608-443f-a863-521ec94aef70\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.306294 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f572ed19-6b6b-43e6-a2b9-b59bfe403460-cert\") pod \"infra-operator-controller-manager-769d9c7585-hv2bc\" (UID: \"f572ed19-6b6b-43e6-a2b9-b59bfe403460\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.306313 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2t77\" (UniqueName: \"kubernetes.io/projected/c8611d57-ca99-4b6d-ade8-f7f3bce489f4-kube-api-access-g2t77\") pod \"ironic-operator-controller-manager-5c75d7c94b-4fk5h\" (UID: \"c8611d57-ca99-4b6d-ade8-f7f3bce489f4\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.306348 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvdhd\" (UniqueName: \"kubernetes.io/projected/9cac3108-fd73-4cfb-a801-b255fcaf9860-kube-api-access-lvdhd\") pod \"manila-operator-controller-manager-7bb88cb858-4rjh7\" (UID: \"9cac3108-fd73-4cfb-a801-b255fcaf9860\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.306366 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j9j5\" (UniqueName: \"kubernetes.io/projected/e9c6dc0a-9868-4554-8563-d6b83ed3d26b-kube-api-access-7j9j5\") pod \"keystone-operator-controller-manager-7879fb76fd-l44mr\" (UID: \"e9c6dc0a-9868-4554-8563-d6b83ed3d26b\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr"
Nov 23 07:02:36 crc kubenswrapper[4988]: E1123 07:02:36.308530 4988 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 23 07:02:36 crc kubenswrapper[4988]: E1123 07:02:36.308583 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f572ed19-6b6b-43e6-a2b9-b59bfe403460-cert podName:f572ed19-6b6b-43e6-a2b9-b59bfe403460 nodeName:}" failed. No retries permitted until 2025-11-23 07:02:36.80856537 +0000 UTC m=+1009.117078133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f572ed19-6b6b-43e6-a2b9-b59bfe403460-cert") pod "infra-operator-controller-manager-769d9c7585-hv2bc" (UID: "f572ed19-6b6b-43e6-a2b9-b59bfe403460") : secret "infra-operator-webhook-server-cert" not found
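
Note: this is the first of several identical failures in this section: the pod references a secret volume, but the Secret object (here infra-operator-webhook-server-cert; later openstack-baremetal-operator-webhook-server-cert and webhook-server-cert) does not exist yet, so MountVolume.SetUp cannot materialize it and the pod is held in ContainerCreating. Such secrets are normally published by the operator machinery once its webhook certificates are issued, so the errors resolve on their own. Purely as an illustration, a client-go sketch of creating such a TLS secret by hand (names taken from the log; the PEM bytes are placeholders):

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the local kubeconfig (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	secret := &corev1.Secret{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "infra-operator-webhook-server-cert", // name from the log
    			Namespace: "openstack-operators",
    		},
    		Type: corev1.SecretTypeTLS,
    		Data: map[string][]byte{
    			// Placeholder PEM material, not real certificates.
    			corev1.TLSCertKey:       []byte("-----BEGIN CERTIFICATE-----\n..."),
    			corev1.TLSPrivateKeyKey: []byte("-----BEGIN PRIVATE KEY-----\n..."),
    		},
    	}
    	if _, err := client.CoreV1().Secrets(secret.Namespace).Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
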
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.315247 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.322369 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.329300 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.334272 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.337418 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.344750 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bd6lx"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.360303 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.361453 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.367383 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-psrz7"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.370235 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j9j5\" (UniqueName: \"kubernetes.io/projected/e9c6dc0a-9868-4554-8563-d6b83ed3d26b-kube-api-access-7j9j5\") pod \"keystone-operator-controller-manager-7879fb76fd-l44mr\" (UID: \"e9c6dc0a-9868-4554-8563-d6b83ed3d26b\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.377929 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn47w\" (UniqueName: \"kubernetes.io/projected/74d47023-c608-443f-a863-521ec94aef70-kube-api-access-vn47w\") pod \"glance-operator-controller-manager-8667fbf6f6-kw5fz\" (UID: \"74d47023-c608-443f-a863-521ec94aef70\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.378014 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.378341 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlcbd\" (UniqueName: \"kubernetes.io/projected/cbf8310b-969a-4233-9224-5fead64dda9e-kube-api-access-vlcbd\") pod \"heat-operator-controller-manager-bf4c6585d-bdkmj\" (UID: \"cbf8310b-969a-4233-9224-5fead64dda9e\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.382791 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67zmq\" (UniqueName: \"kubernetes.io/projected/a583f2f1-c89f-499a-a884-959c259bb45f-kube-api-access-67zmq\") pod \"horizon-operator-controller-manager-5d86b44686-5qfzn\" (UID: \"a583f2f1-c89f-499a-a884-959c259bb45f\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.392908 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.398174 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.401728 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvdhd\" (UniqueName: \"kubernetes.io/projected/9cac3108-fd73-4cfb-a801-b255fcaf9860-kube-api-access-lvdhd\") pod \"manila-operator-controller-manager-7bb88cb858-4rjh7\" (UID: \"9cac3108-fd73-4cfb-a801-b255fcaf9860\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.406159 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzdl2\" (UniqueName: \"kubernetes.io/projected/f572ed19-6b6b-43e6-a2b9-b59bfe403460-kube-api-access-jzdl2\") pod \"infra-operator-controller-manager-769d9c7585-hv2bc\" (UID: \"f572ed19-6b6b-43e6-a2b9-b59bfe403460\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.436562 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2t77\" (UniqueName: \"kubernetes.io/projected/c8611d57-ca99-4b6d-ade8-f7f3bce489f4-kube-api-access-g2t77\") pod \"ironic-operator-controller-manager-5c75d7c94b-4fk5h\" (UID: \"c8611d57-ca99-4b6d-ade8-f7f3bce489f4\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.437020 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.453474 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdqhp\" (UniqueName: \"kubernetes.io/projected/bf7f663a-839b-4e58-b112-4da1e76f2def-kube-api-access-tdqhp\") pod \"neutron-operator-controller-manager-66b7d6f598-5jtrf\" (UID: \"bf7f663a-839b-4e58-b112-4da1e76f2def\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.453568 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64nv5\" (UniqueName: \"kubernetes.io/projected/3c7c9d25-87ae-421d-9f54-f74ff0b68e49-kube-api-access-64nv5\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-pnlkp\" (UID: \"3c7c9d25-87ae-421d-9f54-f74ff0b68e49\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.453599 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgg8\" (UniqueName: \"kubernetes.io/projected/84928c23-bc05-448f-bd61-ce4f32c0edea-kube-api-access-6hgg8\") pod \"octavia-operator-controller-manager-6fdc856c5d-5hc84\" (UID: \"84928c23-bc05-448f-bd61-ce4f32c0edea\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.453652 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgf5d\" (UniqueName: \"kubernetes.io/projected/e3313b12-ee41-4c3d-82dd-be1f78194c70-kube-api-access-dgf5d\") pod \"nova-operator-controller-manager-86d796d84d-cbktd\" (UID: \"e3313b12-ee41-4c3d-82dd-be1f78194c70\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.469662 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.472748 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.479404 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.505615 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.505945 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zs9qr"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.528622 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.544317 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.544639 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.553503 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.555923 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgf5d\" (UniqueName: \"kubernetes.io/projected/e3313b12-ee41-4c3d-82dd-be1f78194c70-kube-api-access-dgf5d\") pod \"nova-operator-controller-manager-86d796d84d-cbktd\" (UID: \"e3313b12-ee41-4c3d-82dd-be1f78194c70\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.556047 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdqhp\" (UniqueName: \"kubernetes.io/projected/bf7f663a-839b-4e58-b112-4da1e76f2def-kube-api-access-tdqhp\") pod \"neutron-operator-controller-manager-66b7d6f598-5jtrf\" (UID: \"bf7f663a-839b-4e58-b112-4da1e76f2def\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.556095 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64nv5\" (UniqueName: \"kubernetes.io/projected/3c7c9d25-87ae-421d-9f54-f74ff0b68e49-kube-api-access-64nv5\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-pnlkp\" (UID: \"3c7c9d25-87ae-421d-9f54-f74ff0b68e49\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.556113 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hgg8\" (UniqueName: \"kubernetes.io/projected/84928c23-bc05-448f-bd61-ce4f32c0edea-kube-api-access-6hgg8\") pod \"octavia-operator-controller-manager-6fdc856c5d-5hc84\" (UID: \"84928c23-bc05-448f-bd61-ce4f32c0edea\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.597951 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.603368 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.605012 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.606675 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.613817 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.619383 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.620332 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.622646 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-f857d"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.623002 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-crgks"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.623463 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-fqfgx"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.647837 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64nv5\" (UniqueName: \"kubernetes.io/projected/3c7c9d25-87ae-421d-9f54-f74ff0b68e49-kube-api-access-64nv5\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-pnlkp\" (UID: \"3c7c9d25-87ae-421d-9f54-f74ff0b68e49\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.653855 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hgg8\" (UniqueName: \"kubernetes.io/projected/84928c23-bc05-448f-bd61-ce4f32c0edea-kube-api-access-6hgg8\") pod \"octavia-operator-controller-manager-6fdc856c5d-5hc84\" (UID: \"84928c23-bc05-448f-bd61-ce4f32c0edea\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.655240 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.657641 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d8486e-f61f-46a0-9e04-1fefad43dede-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd447lpqt\" (UID: \"f7d8486e-f61f-46a0-9e04-1fefad43dede\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.657842 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbkt4\" (UniqueName: \"kubernetes.io/projected/f7d8486e-f61f-46a0-9e04-1fefad43dede-kube-api-access-bbkt4\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd447lpqt\" (UID: \"f7d8486e-f61f-46a0-9e04-1fefad43dede\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.658409 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.669300 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.671232 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.672462 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.676036 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-d65kb"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.689310 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.690487 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgf5d\" (UniqueName: \"kubernetes.io/projected/e3313b12-ee41-4c3d-82dd-be1f78194c70-kube-api-access-dgf5d\") pod \"nova-operator-controller-manager-86d796d84d-cbktd\" (UID: \"e3313b12-ee41-4c3d-82dd-be1f78194c70\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.693701 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdqhp\" (UniqueName: \"kubernetes.io/projected/bf7f663a-839b-4e58-b112-4da1e76f2def-kube-api-access-tdqhp\") pod \"neutron-operator-controller-manager-66b7d6f598-5jtrf\" (UID: \"bf7f663a-839b-4e58-b112-4da1e76f2def\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.700600 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.731286 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.732994 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.733988 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.734105 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.735178 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.739303 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-8fspz"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.739422 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-n2t84"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.744284 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.768000 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.768398 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmrzq\" (UniqueName: \"kubernetes.io/projected/18e43e77-85d1-4d8f-a8f4-06c8e121b817-kube-api-access-kmrzq\") pod \"swift-operator-controller-manager-799cb6ffd6-qzzrd\" (UID: \"18e43e77-85d1-4d8f-a8f4-06c8e121b817\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.768466 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msgbc\" (UniqueName: \"kubernetes.io/projected/55fb8350-1b05-4077-829b-37df675ff824-kube-api-access-msgbc\") pod \"placement-operator-controller-manager-6dc664666c-rnmg7\" (UID: \"55fb8350-1b05-4077-829b-37df675ff824\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.768520 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8mp\" (UniqueName: \"kubernetes.io/projected/a16f0be7-aa48-4002-b8e4-382ae125a870-kube-api-access-xs8mp\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-hk9rx\" (UID: \"a16f0be7-aa48-4002-b8e4-382ae125a870\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.768544 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d8486e-f61f-46a0-9e04-1fefad43dede-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd447lpqt\" (UID: \"f7d8486e-f61f-46a0-9e04-1fefad43dede\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.768574 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbkt4\" (UniqueName: \"kubernetes.io/projected/f7d8486e-f61f-46a0-9e04-1fefad43dede-kube-api-access-bbkt4\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd447lpqt\" (UID: \"f7d8486e-f61f-46a0-9e04-1fefad43dede\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.768594 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtpk\" (UniqueName: \"kubernetes.io/projected/a29c09da-ecac-46e1-9680-523e311135ed-kube-api-access-9xtpk\") pod \"telemetry-operator-controller-manager-7798859c74-xrsgl\" (UID: \"a29c09da-ecac-46e1-9680-523e311135ed\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl"
Nov 23 07:02:36 crc kubenswrapper[4988]: E1123 07:02:36.768768 4988 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 23 07:02:36 crc kubenswrapper[4988]: E1123 07:02:36.768825 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7d8486e-f61f-46a0-9e04-1fefad43dede-cert podName:f7d8486e-f61f-46a0-9e04-1fefad43dede nodeName:}" failed. No retries permitted until 2025-11-23 07:02:37.268807956 +0000 UTC m=+1009.577320719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7d8486e-f61f-46a0-9e04-1fefad43dede-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" (UID: "f7d8486e-f61f-46a0-9e04-1fefad43dede") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.803436 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.811918 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbkt4\" (UniqueName: \"kubernetes.io/projected/f7d8486e-f61f-46a0-9e04-1fefad43dede-kube-api-access-bbkt4\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd447lpqt\" (UID: \"f7d8486e-f61f-46a0-9e04-1fefad43dede\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.833007 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.834393 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.838527 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.843543 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2749s"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.852835 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z"]
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.874964 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmrzq\" (UniqueName: \"kubernetes.io/projected/18e43e77-85d1-4d8f-a8f4-06c8e121b817-kube-api-access-kmrzq\") pod \"swift-operator-controller-manager-799cb6ffd6-qzzrd\" (UID: \"18e43e77-85d1-4d8f-a8f4-06c8e121b817\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.875027 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f572ed19-6b6b-43e6-a2b9-b59bfe403460-cert\") pod \"infra-operator-controller-manager-769d9c7585-hv2bc\" (UID: \"f572ed19-6b6b-43e6-a2b9-b59bfe403460\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.875048 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msgbc\" (UniqueName: \"kubernetes.io/projected/55fb8350-1b05-4077-829b-37df675ff824-kube-api-access-msgbc\") pod \"placement-operator-controller-manager-6dc664666c-rnmg7\" (UID: \"55fb8350-1b05-4077-829b-37df675ff824\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.875126 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs8mp\" (UniqueName: \"kubernetes.io/projected/a16f0be7-aa48-4002-b8e4-382ae125a870-kube-api-access-xs8mp\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-hk9rx\" (UID: \"a16f0be7-aa48-4002-b8e4-382ae125a870\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.875160 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hzr7\" (UniqueName: \"kubernetes.io/projected/269b1e70-13f3-412b-a957-a47eb5713b1e-kube-api-access-9hzr7\") pod \"test-operator-controller-manager-8464cf66df-8xtzh\" (UID: \"269b1e70-13f3-412b-a957-a47eb5713b1e\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.875249 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtpk\" (UniqueName: \"kubernetes.io/projected/a29c09da-ecac-46e1-9680-523e311135ed-kube-api-access-9xtpk\") pod \"telemetry-operator-controller-manager-7798859c74-xrsgl\" (UID: \"a29c09da-ecac-46e1-9680-523e311135ed\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl"
Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.875268 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9585\" (UniqueName: \"kubernetes.io/projected/345e22ca-f6b2-417f-80ab-c59f9957fd20-kube-api-access-w9585\") pod \"watcher-operator-controller-manager-7cd4fb6f79-sghjg\" (UID: \"345e22ca-f6b2-417f-80ab-c59f9957fd20\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg"
Nov 23 07:02:36 crc kubenswrapper[4988]: E1123 07:02:36.876146 4988 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 23 07:02:36 crc kubenswrapper[4988]: E1123 07:02:36.879478 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f572ed19-6b6b-43e6-a2b9-b59bfe403460-cert podName:f572ed19-6b6b-43e6-a2b9-b59bfe403460 nodeName:}" failed. No retries permitted until 2025-11-23 07:02:37.879435159 +0000 UTC m=+1010.187947922 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f572ed19-6b6b-43e6-a2b9-b59bfe403460-cert") pod "infra-operator-controller-manager-769d9c7585-hv2bc" (UID: "f572ed19-6b6b-43e6-a2b9-b59bfe403460") : secret "infra-operator-webhook-server-cert" not found
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.944819 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-9rnb6" Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.956645 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g"] Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.977398 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rdk8z\" (UID: \"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.977482 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hzr7\" (UniqueName: \"kubernetes.io/projected/269b1e70-13f3-412b-a957-a47eb5713b1e-kube-api-access-9hzr7\") pod \"test-operator-controller-manager-8464cf66df-8xtzh\" (UID: \"269b1e70-13f3-412b-a957-a47eb5713b1e\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.977527 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9585\" (UniqueName: \"kubernetes.io/projected/345e22ca-f6b2-417f-80ab-c59f9957fd20-kube-api-access-w9585\") pod \"watcher-operator-controller-manager-7cd4fb6f79-sghjg\" (UID: \"345e22ca-f6b2-417f-80ab-c59f9957fd20\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.977548 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52957\" (UniqueName: \"kubernetes.io/projected/ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8-kube-api-access-52957\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rdk8z\" (UID: \"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:36 crc kubenswrapper[4988]: I1123 07:02:36.996753 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.000847 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hzr7\" (UniqueName: \"kubernetes.io/projected/269b1e70-13f3-412b-a957-a47eb5713b1e-kube-api-access-9hzr7\") pod \"test-operator-controller-manager-8464cf66df-8xtzh\" (UID: \"269b1e70-13f3-412b-a957-a47eb5713b1e\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.009869 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9585\" (UniqueName: \"kubernetes.io/projected/345e22ca-f6b2-417f-80ab-c59f9957fd20-kube-api-access-w9585\") pod \"watcher-operator-controller-manager-7cd4fb6f79-sghjg\" (UID: \"345e22ca-f6b2-417f-80ab-c59f9957fd20\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.019136 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.049260 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.081205 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52957\" (UniqueName: \"kubernetes.io/projected/ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8-kube-api-access-52957\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rdk8z\" (UID: \"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.081300 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97z74\" (UniqueName: \"kubernetes.io/projected/6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128-kube-api-access-97z74\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g\" (UID: \"6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.081324 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rdk8z\" (UID: \"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:37 crc kubenswrapper[4988]: E1123 07:02:37.081824 4988 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 23 07:02:37 crc kubenswrapper[4988]: E1123 07:02:37.081896 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8-cert podName:ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8 nodeName:}" failed. No retries permitted until 2025-11-23 07:02:37.581877023 +0000 UTC m=+1009.890389786 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8-cert") pod "openstack-operator-controller-manager-6cb9dc54f8-rdk8z" (UID: "ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8") : secret "webhook-server-cert" not found Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.110798 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.126968 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52957\" (UniqueName: \"kubernetes.io/projected/ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8-kube-api-access-52957\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rdk8z\" (UID: \"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.145634 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.183256 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97z74\" (UniqueName: \"kubernetes.io/projected/6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128-kube-api-access-97z74\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g\" (UID: \"6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.191785 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.202791 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97z74\" (UniqueName: \"kubernetes.io/projected/6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128-kube-api-access-97z74\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g\" (UID: \"6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.225185 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.285281 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d8486e-f61f-46a0-9e04-1fefad43dede-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd447lpqt\" (UID: \"f7d8486e-f61f-46a0-9e04-1fefad43dede\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" Nov 23 07:02:37 crc kubenswrapper[4988]: E1123 07:02:37.285541 4988 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 07:02:37 crc kubenswrapper[4988]: E1123 07:02:37.285589 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7d8486e-f61f-46a0-9e04-1fefad43dede-cert podName:f7d8486e-f61f-46a0-9e04-1fefad43dede nodeName:}" failed. 
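
[annotation] Each kubenswrapper record in this log follows klog's header format: a single severity character (I/W/E), MMDD, time, PID, source file:line, then the message. A small parsing sketch for tallying records by severity and source file; the field layout is inferred from the lines in this log:

    import re
    from collections import Counter

    # Matches klog headers like:
    #   E1123 07:02:37.081824 4988 secret.go:188] Couldn't get secret ...
    KLOG = re.compile(
        r'(?P<level>[IWE])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) '
        r'(?P<pid>\d+) (?P<source>[\w./-]+:\d+)\] (?P<msg>.*)')

    def tally(lines):
        counts = Counter()
        for line in lines:
            m = KLOG.search(line)
            if m:
                counts[(m['level'], m['source'].split(':')[0])] += 1
        return counts

    sample = ["E1123 07:02:37.081824 4988 secret.go:188] Couldn't get secret"]
    print(tally(sample))   # Counter({('E', 'secret.go'): 1})

Run over the whole journal, a tally like this makes the shape of this section obvious: a cluster of E-level secret.go and nestedpendingoperations.go records during the mount races, surrounded by routine I-level kubelet.go SyncLoop traffic.
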
No retries permitted until 2025-11-23 07:02:38.285573698 +0000 UTC m=+1010.594086461 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f7d8486e-f61f-46a0-9e04-1fefad43dede-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" (UID: "f7d8486e-f61f-46a0-9e04-1fefad43dede") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.493579 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch"] Nov 23 07:02:37 crc kubenswrapper[4988]: W1123 07:02:37.516158 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08f2612c_cf4d_42a5_81df_238887b3e77d.slice/crio-ee4c70e97a28b96ad94c2d7bf6f8784fbf428a94ba80f6be852031dc80c082b7 WatchSource:0}: Error finding container ee4c70e97a28b96ad94c2d7bf6f8784fbf428a94ba80f6be852031dc80c082b7: Status 404 returned error can't find the container with id ee4c70e97a28b96ad94c2d7bf6f8784fbf428a94ba80f6be852031dc80c082b7 Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.522329 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5"] Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.597394 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rdk8z\" (UID: \"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.623843 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-rdk8z\" (UID: \"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.785227 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj"] Nov 23 07:02:37 crc kubenswrapper[4988]: W1123 07:02:37.795090 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf8310b_969a_4233_9224_5fead64dda9e.slice/crio-7c148dda8d3d1678173b62139aee73da934ea6da46865e959108f554d65f33f4 WatchSource:0}: Error finding container 7c148dda8d3d1678173b62139aee73da934ea6da46865e959108f554d65f33f4: Status 404 returned error can't find the container with id 7c148dda8d3d1678173b62139aee73da934ea6da46865e959108f554d65f33f4 Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.796015 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn"] Nov 23 07:02:37 crc kubenswrapper[4988]: W1123 07:02:37.801555 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda583f2f1_c89f_499a_a884_959c259bb45f.slice/crio-3ab8f03fd08c862d262db4b50af939ec8ddded2193d5ba0934646dba6cfb7b0e WatchSource:0}: Error finding container 
3ab8f03fd08c862d262db4b50af939ec8ddded2193d5ba0934646dba6cfb7b0e: Status 404 returned error can't find the container with id 3ab8f03fd08c862d262db4b50af939ec8ddded2193d5ba0934646dba6cfb7b0e Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.806691 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.902146 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f572ed19-6b6b-43e6-a2b9-b59bfe403460-cert\") pod \"infra-operator-controller-manager-769d9c7585-hv2bc\" (UID: \"f572ed19-6b6b-43e6-a2b9-b59bfe403460\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.906264 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f572ed19-6b6b-43e6-a2b9-b59bfe403460-cert\") pod \"infra-operator-controller-manager-769d9c7585-hv2bc\" (UID: \"f572ed19-6b6b-43e6-a2b9-b59bfe403460\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" Nov 23 07:02:37 crc kubenswrapper[4988]: I1123 07:02:37.936781 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.164870 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn" event={"ID":"a583f2f1-c89f-499a-a884-959c259bb45f","Type":"ContainerStarted","Data":"3ab8f03fd08c862d262db4b50af939ec8ddded2193d5ba0934646dba6cfb7b0e"} Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.173537 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5" event={"ID":"5dfa0ca5-a839-48ed-be21-2f065840d1f9","Type":"ContainerStarted","Data":"00a4365134b621643751ad26772f566080fbc7b2796f0ca9471cb41dc9a620f2"} Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.179011 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj" event={"ID":"cbf8310b-969a-4233-9224-5fead64dda9e","Type":"ContainerStarted","Data":"7c148dda8d3d1678173b62139aee73da934ea6da46865e959108f554d65f33f4"} Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.181119 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.193121 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch" event={"ID":"08f2612c-cf4d-42a5-81df-238887b3e77d","Type":"ContainerStarted","Data":"ee4c70e97a28b96ad94c2d7bf6f8784fbf428a94ba80f6be852031dc80c082b7"} Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.196735 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.206682 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.217306 4988 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.223954 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.232691 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.239008 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr"] Nov 23 07:02:38 crc kubenswrapper[4988]: W1123 07:02:38.240257 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c7c9d25_87ae_421d_9f54_f74ff0b68e49.slice/crio-3ceee197ea226154bdbb2c9e1414ddc40601353aea06fa0771e01142e414740c WatchSource:0}: Error finding container 3ceee197ea226154bdbb2c9e1414ddc40601353aea06fa0771e01142e414740c: Status 404 returned error can't find the container with id 3ceee197ea226154bdbb2c9e1414ddc40601353aea06fa0771e01142e414740c Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.245535 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2"] Nov 23 07:02:38 crc kubenswrapper[4988]: W1123 07:02:38.261336 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fe52e6b_130d_42a1_b6f5_334df6a86ceb.slice/crio-e0530e7e924f12fefd01ce065777e4dbd029b7d62488e67a7661fce8a5abd5c6 WatchSource:0}: Error finding container e0530e7e924f12fefd01ce065777e4dbd029b7d62488e67a7661fce8a5abd5c6: Status 404 returned error can't find the container with id e0530e7e924f12fefd01ce065777e4dbd029b7d62488e67a7661fce8a5abd5c6 Nov 23 07:02:38 crc kubenswrapper[4988]: W1123 07:02:38.278628 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf7f663a_839b_4e58_b112_4da1e76f2def.slice/crio-caf324576ac53128bbbb26f0d5d9ebd5c5ea23c8f6b3a56340942e75d69dda17 WatchSource:0}: Error finding container caf324576ac53128bbbb26f0d5d9ebd5c5ea23c8f6b3a56340942e75d69dda17: Status 404 returned error can't find the container with id caf324576ac53128bbbb26f0d5d9ebd5c5ea23c8f6b3a56340942e75d69dda17 Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.314153 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d8486e-f61f-46a0-9e04-1fefad43dede-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd447lpqt\" (UID: \"f7d8486e-f61f-46a0-9e04-1fefad43dede\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.335215 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7d8486e-f61f-46a0-9e04-1fefad43dede-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd447lpqt\" (UID: \"f7d8486e-f61f-46a0-9e04-1fefad43dede\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.366976 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.424957 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.449289 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.470726 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.489690 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.525566 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.525596 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.528331 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd"] Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.533473 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc"] Nov 23 07:02:38 crc kubenswrapper[4988]: E1123 07:02:38.553914 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kmrzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-799cb6ffd6-qzzrd_openstack-operators(18e43e77-85d1-4d8f-a8f4-06c8e121b817): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 07:02:38 crc kubenswrapper[4988]: E1123 07:02:38.554044 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvdhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7bb88cb858-4rjh7_openstack-operators(9cac3108-fd73-4cfb-a801-b255fcaf9860): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 07:02:38 crc kubenswrapper[4988]: W1123 07:02:38.556824 4988 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda16f0be7_aa48_4002_b8e4_382ae125a870.slice/crio-ad39b1008cadac9681db2840858b1ff9767f64da21b9aa04dddc8bc5d09f35bc WatchSource:0}: Error finding container ad39b1008cadac9681db2840858b1ff9767f64da21b9aa04dddc8bc5d09f35bc: Status 404 returned error can't find the container with id ad39b1008cadac9681db2840858b1ff9767f64da21b9aa04dddc8bc5d09f35bc Nov 23 07:02:38 crc kubenswrapper[4988]: W1123 07:02:38.559115 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod345e22ca_f6b2_417f_80ab_c59f9957fd20.slice/crio-dce68afee338c823a4557396a19ca90d9287b1c3c64b2f7d58d714f7382b3d0a WatchSource:0}: Error finding container dce68afee338c823a4557396a19ca90d9287b1c3c64b2f7d58d714f7382b3d0a: Status 404 returned error can't find the container with id dce68afee338c823a4557396a19ca90d9287b1c3c64b2f7d58d714f7382b3d0a Nov 23 07:02:38 crc kubenswrapper[4988]: W1123 07:02:38.560793 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf572ed19_6b6b_43e6_a2b9_b59bfe403460.slice/crio-7c8d9f20cd7701598636a8e346a336573c0e8ad76064609e84c16518b0c79757 WatchSource:0}: Error finding container 7c8d9f20cd7701598636a8e346a336573c0e8ad76064609e84c16518b0c79757: Status 404 returned error can't find the container with id 7c8d9f20cd7701598636a8e346a336573c0e8ad76064609e84c16518b0c79757 Nov 23 07:02:38 crc kubenswrapper[4988]: E1123 07:02:38.560913 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xs8mp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5bdf4f7f7f-hk9rx_openstack-operators(a16f0be7-aa48-4002-b8e4-382ae125a870): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 07:02:38 crc kubenswrapper[4988]: E1123 07:02:38.570612 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzdl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
infra-operator-controller-manager-769d9c7585-hv2bc_openstack-operators(f572ed19-6b6b-43e6-a2b9-b59bfe403460): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.578901 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh"] Nov 23 07:02:38 crc kubenswrapper[4988]: E1123 07:02:38.579120 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9585,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7cd4fb6f79-sghjg_openstack-operators(345e22ca-f6b2-417f-80ab-c59f9957fd20): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 07:02:38 crc kubenswrapper[4988]: E1123 07:02:38.624698 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9hzr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8464cf66df-8xtzh_openstack-operators(269b1e70-13f3-412b-a957-a47eb5713b1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 23 07:02:38 crc kubenswrapper[4988]: I1123 07:02:38.629277 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z"] Nov 23 07:02:39 crc kubenswrapper[4988]: E1123 07:02:39.099821 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7" podUID="9cac3108-fd73-4cfb-a801-b255fcaf9860" Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.212112 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" event={"ID":"a16f0be7-aa48-4002-b8e4-382ae125a870","Type":"ContainerStarted","Data":"ad39b1008cadac9681db2840858b1ff9767f64da21b9aa04dddc8bc5d09f35bc"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.218915 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" event={"ID":"345e22ca-f6b2-417f-80ab-c59f9957fd20","Type":"ContainerStarted","Data":"dce68afee338c823a4557396a19ca90d9287b1c3c64b2f7d58d714f7382b3d0a"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.236931 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt"] Nov 
23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.237865 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" event={"ID":"a29c09da-ecac-46e1-9680-523e311135ed","Type":"ContainerStarted","Data":"9cb8a878fa050fb137dc5faedc5129ba7cc017746313c53fbc4aff121dbd97ea"} Nov 23 07:02:39 crc kubenswrapper[4988]: W1123 07:02:39.258928 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7d8486e_f61f_46a0_9e04_1fefad43dede.slice/crio-091e370a2158701daa77cad81d939f414cd0e19758fc2d1e741af0704576c15f WatchSource:0}: Error finding container 091e370a2158701daa77cad81d939f414cd0e19758fc2d1e741af0704576c15f: Status 404 returned error can't find the container with id 091e370a2158701daa77cad81d939f414cd0e19758fc2d1e741af0704576c15f Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.282126 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" event={"ID":"f572ed19-6b6b-43e6-a2b9-b59bfe403460","Type":"ContainerStarted","Data":"7c8d9f20cd7701598636a8e346a336573c0e8ad76064609e84c16518b0c79757"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.311065 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz" event={"ID":"74d47023-c608-443f-a863-521ec94aef70","Type":"ContainerStarted","Data":"370cdf79589303bdffbd3cbab5145ec75829ae93c1f6fb23fcdeddfef08b0cba"} Nov 23 07:02:39 crc kubenswrapper[4988]: E1123 07:02:39.314702 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" podUID="a16f0be7-aa48-4002-b8e4-382ae125a870" Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.322986 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7" event={"ID":"55fb8350-1b05-4077-829b-37df675ff824","Type":"ContainerStarted","Data":"bfc84487ae71e1620d99e14a62d1bcdffee0c2e3cd8929539914afe52937624b"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.341075 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf" event={"ID":"bf7f663a-839b-4e58-b112-4da1e76f2def","Type":"ContainerStarted","Data":"caf324576ac53128bbbb26f0d5d9ebd5c5ea23c8f6b3a56340942e75d69dda17"} Nov 23 07:02:39 crc kubenswrapper[4988]: E1123 07:02:39.353901 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" podUID="18e43e77-85d1-4d8f-a8f4-06c8e121b817" Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.355936 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr" event={"ID":"e9c6dc0a-9868-4554-8563-d6b83ed3d26b","Type":"ContainerStarted","Data":"8ddb2c435aadf08fb3af404d2bf0696cec921a244934b6ae3c4801ce9e8888fc"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.369891 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" 
event={"ID":"18e43e77-85d1-4d8f-a8f4-06c8e121b817","Type":"ContainerStarted","Data":"4cb4c51e0cdd82b395bb299f47717db30d545bd9e6a6aeacd29b6fa3b96e1830"} Nov 23 07:02:39 crc kubenswrapper[4988]: E1123 07:02:39.373691 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" podUID="18e43e77-85d1-4d8f-a8f4-06c8e121b817" Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.380016 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" event={"ID":"6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128","Type":"ContainerStarted","Data":"7d6691c540fd381017a8cfdb460bea08e058dec1f2758fe96749acf2b39044b8"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.396687 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd" event={"ID":"e3313b12-ee41-4c3d-82dd-be1f78194c70","Type":"ContainerStarted","Data":"3bcb670d8ecafacf9cae12b72409ec77e6ecc6d920bc5233139e857430e72bfc"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.426050 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2" event={"ID":"9fe52e6b-130d-42a1-b6f5-334df6a86ceb","Type":"ContainerStarted","Data":"e0530e7e924f12fefd01ce065777e4dbd029b7d62488e67a7661fce8a5abd5c6"} Nov 23 07:02:39 crc kubenswrapper[4988]: E1123 07:02:39.441284 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" podUID="345e22ca-f6b2-417f-80ab-c59f9957fd20" Nov 23 07:02:39 crc kubenswrapper[4988]: E1123 07:02:39.452100 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" podUID="f572ed19-6b6b-43e6-a2b9-b59bfe403460" Nov 23 07:02:39 crc kubenswrapper[4988]: E1123 07:02:39.469348 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" podUID="269b1e70-13f3-412b-a957-a47eb5713b1e" Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.472802 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7" event={"ID":"9cac3108-fd73-4cfb-a801-b255fcaf9860","Type":"ContainerStarted","Data":"78db02c75c8b2f7efba83e7d93acf182d88e7eea1dc3a76d1e86da3197cefdd3"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.473409 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7" event={"ID":"9cac3108-fd73-4cfb-a801-b255fcaf9860","Type":"ContainerStarted","Data":"543d46618f4786b30104160a4ec2f810646de53cb48a0e373c394e028db28077"} Nov 23 07:02:39 crc kubenswrapper[4988]: E1123 07:02:39.477151 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7" podUID="9cac3108-fd73-4cfb-a801-b255fcaf9860" Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.515936 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" event={"ID":"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8","Type":"ContainerStarted","Data":"96d5bfa00d803547a946a2883ea9d9fb47b5d38ee76f0a6c548413f1959b765e"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.542296 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" event={"ID":"269b1e70-13f3-412b-a957-a47eb5713b1e","Type":"ContainerStarted","Data":"d226b8459feb7ff304eec4834236e19c3c5af3bc545c5403d6c3dd41c3418f84"} Nov 23 07:02:39 crc kubenswrapper[4988]: E1123 07:02:39.564493 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" podUID="269b1e70-13f3-412b-a957-a47eb5713b1e" Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.581285 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h" event={"ID":"c8611d57-ca99-4b6d-ade8-f7f3bce489f4","Type":"ContainerStarted","Data":"1b64cd6ca6d4bbfcc5b798fd68a139df8f2837c60500eb42a65fa1f9ff50b05f"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.594687 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp" event={"ID":"3c7c9d25-87ae-421d-9f54-f74ff0b68e49","Type":"ContainerStarted","Data":"3ceee197ea226154bdbb2c9e1414ddc40601353aea06fa0771e01142e414740c"} Nov 23 07:02:39 crc kubenswrapper[4988]: I1123 07:02:39.603317 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84" event={"ID":"84928c23-bc05-448f-bd61-ce4f32c0edea","Type":"ContainerStarted","Data":"74d35dcea17d7a69b578a5f38b02fa69eaf7e81110c12e15a122ce1b031ea413"} Nov 23 07:02:40 crc kubenswrapper[4988]: I1123 07:02:40.618122 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" event={"ID":"18e43e77-85d1-4d8f-a8f4-06c8e121b817","Type":"ContainerStarted","Data":"4ad759ea7487ca11a85d94a229339747c15e20c1c99109d1c72b804343282da7"} Nov 23 07:02:40 crc kubenswrapper[4988]: E1123 07:02:40.624312 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" podUID="18e43e77-85d1-4d8f-a8f4-06c8e121b817" Nov 23 07:02:40 crc kubenswrapper[4988]: I1123 07:02:40.630875 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" 
event={"ID":"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8","Type":"ContainerStarted","Data":"ad313da50a6f27e170fd65e11bf7ccabd70f31712112416bd40811846f4276ff"} Nov 23 07:02:40 crc kubenswrapper[4988]: I1123 07:02:40.630924 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" event={"ID":"ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8","Type":"ContainerStarted","Data":"435ce01f81fbe12e304b59d0054626987b548699ff37d8a178573b950a71a9a4"} Nov 23 07:02:40 crc kubenswrapper[4988]: I1123 07:02:40.631631 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:40 crc kubenswrapper[4988]: I1123 07:02:40.645421 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" event={"ID":"f572ed19-6b6b-43e6-a2b9-b59bfe403460","Type":"ContainerStarted","Data":"ccde1233db9ebc43528b0dafa2cf42d6ff8fe56dc21d0e0862b1546dadb6e8e4"} Nov 23 07:02:40 crc kubenswrapper[4988]: E1123 07:02:40.647629 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" podUID="f572ed19-6b6b-43e6-a2b9-b59bfe403460" Nov 23 07:02:40 crc kubenswrapper[4988]: I1123 07:02:40.656050 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" event={"ID":"269b1e70-13f3-412b-a957-a47eb5713b1e","Type":"ContainerStarted","Data":"917c64b67db8d78a01b9c3cad1c2270f6f779f0c892ba29c182e9c6caa9729cb"} Nov 23 07:02:40 crc kubenswrapper[4988]: E1123 07:02:40.657337 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" podUID="269b1e70-13f3-412b-a957-a47eb5713b1e" Nov 23 07:02:40 crc kubenswrapper[4988]: I1123 07:02:40.660756 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" event={"ID":"f7d8486e-f61f-46a0-9e04-1fefad43dede","Type":"ContainerStarted","Data":"091e370a2158701daa77cad81d939f414cd0e19758fc2d1e741af0704576c15f"} Nov 23 07:02:40 crc kubenswrapper[4988]: I1123 07:02:40.663484 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" event={"ID":"a16f0be7-aa48-4002-b8e4-382ae125a870","Type":"ContainerStarted","Data":"9713241c5d8c0885ece98c0097c06475687c234db4d8c22fcf310a7e122e6366"} Nov 23 07:02:40 crc kubenswrapper[4988]: E1123 07:02:40.672520 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" podUID="a16f0be7-aa48-4002-b8e4-382ae125a870" Nov 23 07:02:40 crc 
kubenswrapper[4988]: I1123 07:02:40.677757 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" podStartSLOduration=4.6777370000000005 podStartE2EDuration="4.677737s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:40.673163506 +0000 UTC m=+1012.981676289" watchObservedRunningTime="2025-11-23 07:02:40.677737 +0000 UTC m=+1012.986249753" Nov 23 07:02:40 crc kubenswrapper[4988]: I1123 07:02:40.678382 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" event={"ID":"345e22ca-f6b2-417f-80ab-c59f9957fd20","Type":"ContainerStarted","Data":"0d046f9a4d13da956132c5063fff8b9b2b751eda087825566f5bb20a71a9be74"} Nov 23 07:02:40 crc kubenswrapper[4988]: E1123 07:02:40.679534 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7" podUID="9cac3108-fd73-4cfb-a801-b255fcaf9860" Nov 23 07:02:40 crc kubenswrapper[4988]: E1123 07:02:40.681391 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" podUID="345e22ca-f6b2-417f-80ab-c59f9957fd20" Nov 23 07:02:41 crc kubenswrapper[4988]: E1123 07:02:41.685181 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" podUID="f572ed19-6b6b-43e6-a2b9-b59bfe403460" Nov 23 07:02:41 crc kubenswrapper[4988]: E1123 07:02:41.685432 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" podUID="345e22ca-f6b2-417f-80ab-c59f9957fd20" Nov 23 07:02:41 crc kubenswrapper[4988]: E1123 07:02:41.692507 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" podUID="269b1e70-13f3-412b-a957-a47eb5713b1e" Nov 23 07:02:41 crc kubenswrapper[4988]: E1123 07:02:41.695932 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" podUID="18e43e77-85d1-4d8f-a8f4-06c8e121b817" Nov 23 07:02:41 crc kubenswrapper[4988]: E1123 07:02:41.696071 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" podUID="a16f0be7-aa48-4002-b8e4-382ae125a870" Nov 23 07:02:47 crc kubenswrapper[4988]: I1123 07:02:47.812049 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-rdk8z" Nov 23 07:02:51 crc kubenswrapper[4988]: E1123 07:02:51.918124 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7" Nov 23 07:02:51 crc kubenswrapper[4988]: E1123 07:02:51.918796 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:70cce55bcf89468c5d468ca2fc317bfc3dc5f2bef1c502df9faca2eb1293ede7,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4d2r4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7768f8c84f-kt7s2_openstack-operators(9fe52e6b-130d-42a1-b6f5-334df6a86ceb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:03:05 crc kubenswrapper[4988]: E1123 07:03:05.303757 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6" Nov 23 07:03:05 crc kubenswrapper[4988]: E1123 07:03:05.304467 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tdqhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-66b7d6f598-5jtrf_openstack-operators(bf7f663a-839b-4e58-b112-4da1e76f2def): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:03:09 crc kubenswrapper[4988]: E1123 07:03:09.138406 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 23 07:03:09 crc kubenswrapper[4988]: E1123 07:03:09.139183 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7j9j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7879fb76fd-l44mr_openstack-operators(e9c6dc0a-9868-4554-8563-d6b83ed3d26b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:03:09 crc kubenswrapper[4988]: E1123 07:03:09.184987 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04" Nov 23 07:03:09 crc kubenswrapper[4988]: E1123 07:03:09.185310 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-64nv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6f8c5b86cb-pnlkp_openstack-operators(3c7c9d25-87ae-421d-9f54-f74ff0b68e49): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:03:09 crc kubenswrapper[4988]: E1123 07:03:09.662096 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f" Nov 23 07:03:09 crc kubenswrapper[4988]: E1123 07:03:09.662268 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xtpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7798859c74-xrsgl_openstack-operators(a29c09da-ecac-46e1-9680-523e311135ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:03:10 crc kubenswrapper[4988]: E1123 07:03:10.761693 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 23 07:03:10 crc kubenswrapper[4988]: E1123 07:03:10.762338 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97z74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g_openstack-operators(6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:03:10 crc 
kubenswrapper[4988]: E1123 07:03:10.763532 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" podUID="6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128" Nov 23 07:03:10 crc kubenswrapper[4988]: E1123 07:03:10.930955 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" podUID="6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128" Nov 23 07:03:16 crc kubenswrapper[4988]: E1123 07:03:16.257818 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" podUID="a29c09da-ecac-46e1-9680-523e311135ed" Nov 23 07:03:16 crc kubenswrapper[4988]: E1123 07:03:16.270323 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf" podUID="bf7f663a-839b-4e58-b112-4da1e76f2def" Nov 23 07:03:16 crc kubenswrapper[4988]: E1123 07:03:16.273163 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp" podUID="3c7c9d25-87ae-421d-9f54-f74ff0b68e49" Nov 23 07:03:16 crc kubenswrapper[4988]: E1123 07:03:16.285148 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr" podUID="e9c6dc0a-9868-4554-8563-d6b83ed3d26b" Nov 23 07:03:16 crc kubenswrapper[4988]: E1123 07:03:16.300423 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2" podUID="9fe52e6b-130d-42a1-b6f5-334df6a86ceb" Nov 23 07:03:16 crc kubenswrapper[4988]: I1123 07:03:16.987400 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz" event={"ID":"74d47023-c608-443f-a863-521ec94aef70","Type":"ContainerStarted","Data":"737d5f7d5764fc05486f887350f39b2ca46d728603eb91e18cd963c74372a31c"} Nov 23 07:03:16 crc kubenswrapper[4988]: I1123 07:03:16.996344 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" event={"ID":"f572ed19-6b6b-43e6-a2b9-b59bfe403460","Type":"ContainerStarted","Data":"283256baa1b3eac69101ff8dbb9968946caa9f060704266e23fcf559a3269255"} Nov 23 07:03:16 crc kubenswrapper[4988]: I1123 07:03:16.996598 4988 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.002424 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5" event={"ID":"5dfa0ca5-a839-48ed-be21-2f065840d1f9","Type":"ContainerStarted","Data":"ccec16b809a15240aa14659add3e4210418fc3e3e7add4d3c124e703c7eff607"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.002469 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5" event={"ID":"5dfa0ca5-a839-48ed-be21-2f065840d1f9","Type":"ContainerStarted","Data":"71f73f33c308a6e01b1d2ce7e33eaab5b580e94c3fbbdb9cafc44d19151dfb6d"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.002560 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.015050 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" event={"ID":"269b1e70-13f3-412b-a957-a47eb5713b1e","Type":"ContainerStarted","Data":"87859ee98abc72168ed30a014a0b39f1c7b3d75bb9aaf28aaf34f0908e00cab7"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.015305 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.035752 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" podStartSLOduration=4.017882881 podStartE2EDuration="41.035735349s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.570496703 +0000 UTC m=+1010.879009466" lastFinishedPulling="2025-11-23 07:03:15.588349171 +0000 UTC m=+1047.896861934" observedRunningTime="2025-11-23 07:03:17.034669153 +0000 UTC m=+1049.343181916" watchObservedRunningTime="2025-11-23 07:03:17.035735349 +0000 UTC m=+1049.344248112" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.036555 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" event={"ID":"f7d8486e-f61f-46a0-9e04-1fefad43dede","Type":"ContainerStarted","Data":"99720b1c5be694e9b5e6b5c70c591d4cf1dc674b075e4ba8858effdc797c8f13"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.050681 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" event={"ID":"a16f0be7-aa48-4002-b8e4-382ae125a870","Type":"ContainerStarted","Data":"0ee8b27fbc6e1803146fe7c29d34948c58e5696d91e752b50cf6a765d674e5a9"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.051329 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.064048 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd" event={"ID":"e3313b12-ee41-4c3d-82dd-be1f78194c70","Type":"ContainerStarted","Data":"c7a8599c9925116939ab9b7f15f3488533da00f260ce9862f827a399bc584d41"} Nov 23 07:03:17 crc kubenswrapper[4988]: 
I1123 07:03:17.071665 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2" event={"ID":"9fe52e6b-130d-42a1-b6f5-334df6a86ceb","Type":"ContainerStarted","Data":"510e4c638f7b1e03c0cdd5d5dc67c2634341970c60e90870944630132f0b8370"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.076917 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn" event={"ID":"a583f2f1-c89f-499a-a884-959c259bb45f","Type":"ContainerStarted","Data":"5235e68c339f76015e0df9bed7abe98c21c10839839a90025fba3914b173ed03"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.078763 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84" event={"ID":"84928c23-bc05-448f-bd61-ce4f32c0edea","Type":"ContainerStarted","Data":"edef6e053298bbb81d4bb19fc17840363475255c74d83f71db980e23f176cfde"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.079020 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.080573 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp" event={"ID":"3c7c9d25-87ae-421d-9f54-f74ff0b68e49","Type":"ContainerStarted","Data":"5b0c38b022a8e956a97ce10c9a449b2b7b722d183a841f9122cd5a2d65ccfbac"} Nov 23 07:03:17 crc kubenswrapper[4988]: E1123 07:03:17.081597 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp" podUID="3c7c9d25-87ae-421d-9f54-f74ff0b68e49" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.082300 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7" event={"ID":"55fb8350-1b05-4077-829b-37df675ff824","Type":"ContainerStarted","Data":"78c088c6406c1558e790624924685b7bf9e262938d4817c5bf02e4b2d8cbb794"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.084438 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj" event={"ID":"cbf8310b-969a-4233-9224-5fead64dda9e","Type":"ContainerStarted","Data":"d0dcbefe0273abd5f1d8a4aca2bebf89ee5326ead374ac0d149b8ec90ec82366"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.091682 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch" event={"ID":"08f2612c-cf4d-42a5-81df-238887b3e77d","Type":"ContainerStarted","Data":"bb24138013f43332b90f6ad679239128805987cf04e54685801520086daed8c5"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.121409 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" event={"ID":"18e43e77-85d1-4d8f-a8f4-06c8e121b817","Type":"ContainerStarted","Data":"8bc0196e5eac684f06648135c9a2e1c425c078a47085337437caf22c81d3dd3d"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.121785 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.139777 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7" event={"ID":"9cac3108-fd73-4cfb-a801-b255fcaf9860","Type":"ContainerStarted","Data":"72ddbd91637fc9a501665e9834c038b9c462af742191c1d296a2a3157831f500"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.140475 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.141512 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5" podStartSLOduration=8.972913014 podStartE2EDuration="42.141489004s" podCreationTimestamp="2025-11-23 07:02:35 +0000 UTC" firstStartedPulling="2025-11-23 07:02:37.562235943 +0000 UTC m=+1009.870748706" lastFinishedPulling="2025-11-23 07:03:10.730811933 +0000 UTC m=+1043.039324696" observedRunningTime="2025-11-23 07:03:17.068021658 +0000 UTC m=+1049.376534421" watchObservedRunningTime="2025-11-23 07:03:17.141489004 +0000 UTC m=+1049.450001787" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.154025 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" podStartSLOduration=4.180958793 podStartE2EDuration="41.154002684s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.623982134 +0000 UTC m=+1010.932494897" lastFinishedPulling="2025-11-23 07:03:15.597026015 +0000 UTC m=+1047.905538788" observedRunningTime="2025-11-23 07:03:17.137872835 +0000 UTC m=+1049.446385598" watchObservedRunningTime="2025-11-23 07:03:17.154002684 +0000 UTC m=+1049.462515447" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.177976 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h" event={"ID":"c8611d57-ca99-4b6d-ade8-f7f3bce489f4","Type":"ContainerStarted","Data":"524708449e961a57b622b1378d8c457722b33e5ce3846a9a3742f4f8bff3f2a6"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.178825 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.179635 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" podStartSLOduration=4.151775459 podStartE2EDuration="41.179622807s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.560763769 +0000 UTC m=+1010.869276532" lastFinishedPulling="2025-11-23 07:03:15.588611117 +0000 UTC m=+1047.897123880" observedRunningTime="2025-11-23 07:03:17.177479954 +0000 UTC m=+1049.485992727" watchObservedRunningTime="2025-11-23 07:03:17.179622807 +0000 UTC m=+1049.488135570" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.193631 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf" event={"ID":"bf7f663a-839b-4e58-b112-4da1e76f2def","Type":"ContainerStarted","Data":"f454ba4dadb1a7b21edb75d4e29ce0275d6c2ad92c9745e7fdaf88669a4df887"} Nov 23 07:03:17 crc 
kubenswrapper[4988]: I1123 07:03:17.198794 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr" event={"ID":"e9c6dc0a-9868-4554-8563-d6b83ed3d26b","Type":"ContainerStarted","Data":"0c07d1bf5f052256daa655f16e7e317b6521179c61b9cb99d74b14130b6c9e46"} Nov 23 07:03:17 crc kubenswrapper[4988]: E1123 07:03:17.200508 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr" podUID="e9c6dc0a-9868-4554-8563-d6b83ed3d26b" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.208811 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84" podStartSLOduration=8.20824829 podStartE2EDuration="41.208795027s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.269887069 +0000 UTC m=+1010.578399832" lastFinishedPulling="2025-11-23 07:03:11.270433806 +0000 UTC m=+1043.578946569" observedRunningTime="2025-11-23 07:03:17.205865255 +0000 UTC m=+1049.514378028" watchObservedRunningTime="2025-11-23 07:03:17.208795027 +0000 UTC m=+1049.517307790" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.216799 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" event={"ID":"345e22ca-f6b2-417f-80ab-c59f9957fd20","Type":"ContainerStarted","Data":"06317c6616d837b32732707f79b480129916bcc32d89c7672c99e8dbe690b9f7"} Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.217481 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.226390 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" event={"ID":"a29c09da-ecac-46e1-9680-523e311135ed","Type":"ContainerStarted","Data":"1e2a464c8057837c434d7b430bc4ed3064b08b1610754976707d8319e355916c"} Nov 23 07:03:17 crc kubenswrapper[4988]: E1123 07:03:17.229799 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" podUID="a29c09da-ecac-46e1-9680-523e311135ed" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.279492 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" podStartSLOduration=4.252346723 podStartE2EDuration="41.279476955s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.553795695 +0000 UTC m=+1010.862308458" lastFinishedPulling="2025-11-23 07:03:15.580925927 +0000 UTC m=+1047.889438690" observedRunningTime="2025-11-23 07:03:17.279033084 +0000 UTC m=+1049.587545847" watchObservedRunningTime="2025-11-23 07:03:17.279476955 +0000 UTC m=+1049.587989718" Nov 23 07:03:17 crc kubenswrapper[4988]: 
I1123 07:03:17.412021 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7" podStartSLOduration=4.374173555 podStartE2EDuration="41.412004632s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.553959469 +0000 UTC m=+1010.862472232" lastFinishedPulling="2025-11-23 07:03:15.591790506 +0000 UTC m=+1047.900303309" observedRunningTime="2025-11-23 07:03:17.409617003 +0000 UTC m=+1049.718129766" watchObservedRunningTime="2025-11-23 07:03:17.412004632 +0000 UTC m=+1049.720517395" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.479848 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" podStartSLOduration=4.287600142 podStartE2EDuration="41.479826439s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.578956595 +0000 UTC m=+1010.887469358" lastFinishedPulling="2025-11-23 07:03:15.771182892 +0000 UTC m=+1048.079695655" observedRunningTime="2025-11-23 07:03:17.443548822 +0000 UTC m=+1049.752061585" watchObservedRunningTime="2025-11-23 07:03:17.479826439 +0000 UTC m=+1049.788339212" Nov 23 07:03:17 crc kubenswrapper[4988]: I1123 07:03:17.509716 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h" podStartSLOduration=6.9382307260000005 podStartE2EDuration="41.509701938s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.235176909 +0000 UTC m=+1010.543689672" lastFinishedPulling="2025-11-23 07:03:12.806648091 +0000 UTC m=+1045.115160884" observedRunningTime="2025-11-23 07:03:17.508513988 +0000 UTC m=+1049.817026751" watchObservedRunningTime="2025-11-23 07:03:17.509701938 +0000 UTC m=+1049.818214701" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.260071 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf" event={"ID":"bf7f663a-839b-4e58-b112-4da1e76f2def","Type":"ContainerStarted","Data":"4d1484185efc3c057f01f2cd13344e70a57c217303507b0098e270e67133fcab"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.260461 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.264001 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd" event={"ID":"e3313b12-ee41-4c3d-82dd-be1f78194c70","Type":"ContainerStarted","Data":"10d7e7955469ce10a893987c8079e5339f6c3c7bc5c315fcaf909dd366486dd3"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.264132 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.267643 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2" event={"ID":"9fe52e6b-130d-42a1-b6f5-334df6a86ceb","Type":"ContainerStarted","Data":"413c7d6ada38f512285befef4c5e06ef4cf1348c6eeb5b44ee58c091fd4e8f1d"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.267712 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.269758 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch" event={"ID":"08f2612c-cf4d-42a5-81df-238887b3e77d","Type":"ContainerStarted","Data":"a4f97bb37a5302f4b39d81914326cee2dd12de14f489acb70750c3a5fddc63fe"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.269842 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.271312 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn" event={"ID":"a583f2f1-c89f-499a-a884-959c259bb45f","Type":"ContainerStarted","Data":"ef48b15eba9ab652d057bc545528ef0e4835aa40ffa4ae2a192ccf1a6a8511ec"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.271511 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.273359 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz" event={"ID":"74d47023-c608-443f-a863-521ec94aef70","Type":"ContainerStarted","Data":"0b55ab1d9539fd24c3b10ca7f8a33f5a08e51e08a6f50bfce66337fd5f76d4b0"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.273669 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.278696 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf" podStartSLOduration=2.934288192 podStartE2EDuration="42.278682542s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.284741241 +0000 UTC m=+1010.593254004" lastFinishedPulling="2025-11-23 07:03:17.629135591 +0000 UTC m=+1049.937648354" observedRunningTime="2025-11-23 07:03:18.274942069 +0000 UTC m=+1050.583454842" watchObservedRunningTime="2025-11-23 07:03:18.278682542 +0000 UTC m=+1050.587195325" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.280303 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7" event={"ID":"55fb8350-1b05-4077-829b-37df675ff824","Type":"ContainerStarted","Data":"40ec84e51b104eefd80b29e29f0e0ff71f827ce3fad2738cdbad42f861ebd10e"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.280376 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.282463 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84" event={"ID":"84928c23-bc05-448f-bd61-ce4f32c0edea","Type":"ContainerStarted","Data":"b8368603afddc4795e9eb9d0beb73d16bcdc61fac0ffc126b20a8f0df65c8050"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.284924 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj" 
event={"ID":"cbf8310b-969a-4233-9224-5fead64dda9e","Type":"ContainerStarted","Data":"070126c43d45f5dfdd8c40352b18ebb2806093c6e70eca3e6d8e91cac61fec23"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.285071 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.287916 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h" event={"ID":"c8611d57-ca99-4b6d-ade8-f7f3bce489f4","Type":"ContainerStarted","Data":"87fec70866e894ba49153f5301df88be2990f2bd06c1a3290cb25ed23baa2a8e"} Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.290136 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" event={"ID":"f7d8486e-f61f-46a0-9e04-1fefad43dede","Type":"ContainerStarted","Data":"10204e04ed03e069f34e8df69e31c7cc42d0ec099e10b1256377f6a8647fea23"} Nov 23 07:03:18 crc kubenswrapper[4988]: E1123 07:03:18.294864 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp" podUID="3c7c9d25-87ae-421d-9f54-f74ff0b68e49" Nov 23 07:03:18 crc kubenswrapper[4988]: E1123 07:03:18.295362 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" podUID="a29c09da-ecac-46e1-9680-523e311135ed" Nov 23 07:03:18 crc kubenswrapper[4988]: E1123 07:03:18.295370 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr" podUID="e9c6dc0a-9868-4554-8563-d6b83ed3d26b" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.298093 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2" podStartSLOduration=3.995320856 podStartE2EDuration="43.298075941s" podCreationTimestamp="2025-11-23 07:02:35 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.271643573 +0000 UTC m=+1010.580156336" lastFinishedPulling="2025-11-23 07:03:17.574398668 +0000 UTC m=+1049.882911421" observedRunningTime="2025-11-23 07:03:18.295034646 +0000 UTC m=+1050.603547429" watchObservedRunningTime="2025-11-23 07:03:18.298075941 +0000 UTC m=+1050.606588724" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.311965 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz" podStartSLOduration=8.752064682 podStartE2EDuration="43.311948184s" podCreationTimestamp="2025-11-23 07:02:35 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.246613355 +0000 UTC 
m=+1010.555126118" lastFinishedPulling="2025-11-23 07:03:12.806496857 +0000 UTC m=+1045.115009620" observedRunningTime="2025-11-23 07:03:18.307647558 +0000 UTC m=+1050.616160331" watchObservedRunningTime="2025-11-23 07:03:18.311948184 +0000 UTC m=+1050.620460957" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.341786 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd" podStartSLOduration=9.315846829 podStartE2EDuration="42.341765122s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.244546294 +0000 UTC m=+1010.553059057" lastFinishedPulling="2025-11-23 07:03:11.270464587 +0000 UTC m=+1043.578977350" observedRunningTime="2025-11-23 07:03:18.32917907 +0000 UTC m=+1050.637691843" watchObservedRunningTime="2025-11-23 07:03:18.341765122 +0000 UTC m=+1050.650277895" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.354300 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn" podStartSLOduration=7.350851853 podStartE2EDuration="42.354274671s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:37.803824078 +0000 UTC m=+1010.112336841" lastFinishedPulling="2025-11-23 07:03:12.807246866 +0000 UTC m=+1045.115759659" observedRunningTime="2025-11-23 07:03:18.348293483 +0000 UTC m=+1050.656806256" watchObservedRunningTime="2025-11-23 07:03:18.354274671 +0000 UTC m=+1050.662787434" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.365436 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.369393 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch" podStartSLOduration=9.61762177 podStartE2EDuration="43.369371084s" podCreationTimestamp="2025-11-23 07:02:35 +0000 UTC" firstStartedPulling="2025-11-23 07:02:37.518777664 +0000 UTC m=+1009.827290427" lastFinishedPulling="2025-11-23 07:03:11.270526978 +0000 UTC m=+1043.579039741" observedRunningTime="2025-11-23 07:03:18.365127009 +0000 UTC m=+1050.673639782" watchObservedRunningTime="2025-11-23 07:03:18.369371084 +0000 UTC m=+1050.677883867" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.382314 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj" podStartSLOduration=8.372703302 podStartE2EDuration="43.382296974s" podCreationTimestamp="2025-11-23 07:02:35 +0000 UTC" firstStartedPulling="2025-11-23 07:02:37.796825243 +0000 UTC m=+1010.105338006" lastFinishedPulling="2025-11-23 07:03:12.806418915 +0000 UTC m=+1045.114931678" observedRunningTime="2025-11-23 07:03:18.379942996 +0000 UTC m=+1050.688455769" watchObservedRunningTime="2025-11-23 07:03:18.382296974 +0000 UTC m=+1050.690809737" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.435428 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" podStartSLOduration=8.931260807 podStartE2EDuration="42.435408547s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:39.302489681 +0000 UTC m=+1011.611002444" 
lastFinishedPulling="2025-11-23 07:03:12.806637391 +0000 UTC m=+1045.115150184" observedRunningTime="2025-11-23 07:03:18.422712813 +0000 UTC m=+1050.731225596" watchObservedRunningTime="2025-11-23 07:03:18.435408547 +0000 UTC m=+1050.743921310" Nov 23 07:03:18 crc kubenswrapper[4988]: I1123 07:03:18.475395 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7" podStartSLOduration=9.728548668 podStartE2EDuration="42.475379655s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.523627039 +0000 UTC m=+1010.832139802" lastFinishedPulling="2025-11-23 07:03:11.270458026 +0000 UTC m=+1043.578970789" observedRunningTime="2025-11-23 07:03:18.471894479 +0000 UTC m=+1050.780407242" watchObservedRunningTime="2025-11-23 07:03:18.475379655 +0000 UTC m=+1050.783892418" Nov 23 07:03:25 crc kubenswrapper[4988]: I1123 07:03:25.499493 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.269124 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-jvdch" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.338263 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fhfm5" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.369117 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" event={"ID":"6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128","Type":"ContainerStarted","Data":"ab91ef8d10e3c91892eba97de3f025c45b308c57c6b764895cd3b2e19acf89ba"} Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.389130 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g" podStartSLOduration=2.841088316 podStartE2EDuration="50.389107038s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.517386882 +0000 UTC m=+1010.825899645" lastFinishedPulling="2025-11-23 07:03:26.065405604 +0000 UTC m=+1058.373918367" observedRunningTime="2025-11-23 07:03:26.385786456 +0000 UTC m=+1058.694299239" watchObservedRunningTime="2025-11-23 07:03:26.389107038 +0000 UTC m=+1058.697619841" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.396864 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-5qfzn" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.401981 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-bdkmj" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.471499 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-4fk5h" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.530879 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-4rjh7" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.556428 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-kt7s2" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.677442 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-kw5fz" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.704451 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-5jtrf" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.772054 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-cbktd" Nov 23 07:03:26 crc kubenswrapper[4988]: I1123 07:03:26.807768 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-5hc84" Nov 23 07:03:27 crc kubenswrapper[4988]: I1123 07:03:27.023499 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8464cf66df-8xtzh" Nov 23 07:03:27 crc kubenswrapper[4988]: I1123 07:03:27.052781 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-sghjg" Nov 23 07:03:27 crc kubenswrapper[4988]: I1123 07:03:27.115270 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-rnmg7" Nov 23 07:03:27 crc kubenswrapper[4988]: I1123 07:03:27.150338 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qzzrd" Nov 23 07:03:27 crc kubenswrapper[4988]: I1123 07:03:27.195596 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-hk9rx" Nov 23 07:03:27 crc kubenswrapper[4988]: I1123 07:03:27.947006 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-hv2bc" Nov 23 07:03:28 crc kubenswrapper[4988]: I1123 07:03:28.373637 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd447lpqt" Nov 23 07:03:31 crc kubenswrapper[4988]: I1123 07:03:31.412410 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp" event={"ID":"3c7c9d25-87ae-421d-9f54-f74ff0b68e49","Type":"ContainerStarted","Data":"93979ac18b41f9a5f039e9ff97b803aeee562a03e7fa57e37859424a95745530"} Nov 23 07:03:31 crc kubenswrapper[4988]: I1123 07:03:31.413451 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp" Nov 23 07:03:31 crc kubenswrapper[4988]: I1123 07:03:31.435019 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp" podStartSLOduration=2.794547925 podStartE2EDuration="55.434993751s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.280920195 +0000 UTC m=+1010.589432958" lastFinishedPulling="2025-11-23 07:03:30.921365981 +0000 UTC m=+1063.229878784" observedRunningTime="2025-11-23 
07:03:31.430337706 +0000 UTC m=+1063.738850509" watchObservedRunningTime="2025-11-23 07:03:31.434993751 +0000 UTC m=+1063.743506534" Nov 23 07:03:36 crc kubenswrapper[4988]: I1123 07:03:36.661411 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-pnlkp" Nov 23 07:03:36 crc kubenswrapper[4988]: I1123 07:03:36.721440 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" event={"ID":"a29c09da-ecac-46e1-9680-523e311135ed","Type":"ContainerStarted","Data":"fd6a700569346ac84a4faee35c0d1370759df2ee4304d95c38bbfc600e7fb8da"} Nov 23 07:03:36 crc kubenswrapper[4988]: I1123 07:03:36.723339 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr" event={"ID":"e9c6dc0a-9868-4554-8563-d6b83ed3d26b","Type":"ContainerStarted","Data":"9073777a80835dde29da11252e58a5bc9833c71863da7804978f960cb4a1561d"} Nov 23 07:03:36 crc kubenswrapper[4988]: I1123 07:03:36.723579 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr" Nov 23 07:03:36 crc kubenswrapper[4988]: I1123 07:03:36.742507 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr" podStartSLOduration=3.648707416 podStartE2EDuration="1m0.742487534s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.310340803 +0000 UTC m=+1010.618853566" lastFinishedPulling="2025-11-23 07:03:35.404120881 +0000 UTC m=+1067.712633684" observedRunningTime="2025-11-23 07:03:36.739689735 +0000 UTC m=+1069.048202498" watchObservedRunningTime="2025-11-23 07:03:36.742487534 +0000 UTC m=+1069.051000297" Nov 23 07:03:37 crc kubenswrapper[4988]: I1123 07:03:37.765513 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" podStartSLOduration=4.8865734960000005 podStartE2EDuration="1m1.765484449s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="2025-11-23 07:02:38.525215588 +0000 UTC m=+1010.833728351" lastFinishedPulling="2025-11-23 07:03:35.404126511 +0000 UTC m=+1067.712639304" observedRunningTime="2025-11-23 07:03:37.757158563 +0000 UTC m=+1070.065671386" watchObservedRunningTime="2025-11-23 07:03:37.765484449 +0000 UTC m=+1070.073997252" Nov 23 07:03:46 crc kubenswrapper[4988]: I1123 07:03:46.475060 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-l44mr" Nov 23 07:03:46 crc kubenswrapper[4988]: I1123 07:03:46.998827 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" Nov 23 07:03:47 crc kubenswrapper[4988]: I1123 07:03:47.000843 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-xrsgl" Nov 23 07:03:51 crc kubenswrapper[4988]: I1123 07:03:51.672251 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Nov 23 07:03:51 crc kubenswrapper[4988]: I1123 07:03:51.672742 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.509808 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-9b2ml"] Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.511795 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.521169 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.521758 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.522139 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.522207 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-9b2ml"] Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.522446 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zzhcd" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.579819 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6584b49599-pbt9k"] Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.581842 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.585879 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.594315 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-pbt9k"] Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.674453 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-config\") pod \"dnsmasq-dns-6584b49599-pbt9k\" (UID: \"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00\") " pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.674536 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-dns-svc\") pod \"dnsmasq-dns-6584b49599-pbt9k\" (UID: \"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00\") " pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.674614 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-config\") pod \"dnsmasq-dns-7bdd77c89-9b2ml\" (UID: \"f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0\") " pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.674654 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dj9b\" (UniqueName: \"kubernetes.io/projected/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-kube-api-access-2dj9b\") pod \"dnsmasq-dns-7bdd77c89-9b2ml\" (UID: \"f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0\") " pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.674707 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tx9\" (UniqueName: \"kubernetes.io/projected/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-kube-api-access-x8tx9\") pod \"dnsmasq-dns-6584b49599-pbt9k\" (UID: \"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00\") " pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.775819 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-config\") pod \"dnsmasq-dns-7bdd77c89-9b2ml\" (UID: \"f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0\") " pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.775875 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dj9b\" (UniqueName: \"kubernetes.io/projected/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-kube-api-access-2dj9b\") pod \"dnsmasq-dns-7bdd77c89-9b2ml\" (UID: \"f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0\") " pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.775937 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8tx9\" (UniqueName: \"kubernetes.io/projected/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-kube-api-access-x8tx9\") pod \"dnsmasq-dns-6584b49599-pbt9k\" (UID: \"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00\") " 
pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.775961 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-config\") pod \"dnsmasq-dns-6584b49599-pbt9k\" (UID: \"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00\") " pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.775989 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-dns-svc\") pod \"dnsmasq-dns-6584b49599-pbt9k\" (UID: \"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00\") " pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.776670 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-dns-svc\") pod \"dnsmasq-dns-6584b49599-pbt9k\" (UID: \"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00\") " pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.776677 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-config\") pod \"dnsmasq-dns-7bdd77c89-9b2ml\" (UID: \"f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0\") " pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.777280 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-config\") pod \"dnsmasq-dns-6584b49599-pbt9k\" (UID: \"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00\") " pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.803885 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dj9b\" (UniqueName: \"kubernetes.io/projected/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-kube-api-access-2dj9b\") pod \"dnsmasq-dns-7bdd77c89-9b2ml\" (UID: \"f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0\") " pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.806478 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8tx9\" (UniqueName: \"kubernetes.io/projected/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-kube-api-access-x8tx9\") pod \"dnsmasq-dns-6584b49599-pbt9k\" (UID: \"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00\") " pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.886455 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" Nov 23 07:04:05 crc kubenswrapper[4988]: I1123 07:04:05.906090 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:06 crc kubenswrapper[4988]: I1123 07:04:06.354349 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-9b2ml"] Nov 23 07:04:06 crc kubenswrapper[4988]: I1123 07:04:06.401407 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-pbt9k"] Nov 23 07:04:06 crc kubenswrapper[4988]: W1123 07:04:06.405293 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cbe0f1c_5830_4cf8_a99a_38b0f357ed00.slice/crio-3e182cfcad7f128784f7e2e244b0425583a3a96506827f8c0f89aed086193940 WatchSource:0}: Error finding container 3e182cfcad7f128784f7e2e244b0425583a3a96506827f8c0f89aed086193940: Status 404 returned error can't find the container with id 3e182cfcad7f128784f7e2e244b0425583a3a96506827f8c0f89aed086193940 Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.010719 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" event={"ID":"f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0","Type":"ContainerStarted","Data":"0c5c37d9681bc8bc858b47a305d62d277fc1dbf4e96b88357674c1eba8e23a51"} Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.012726 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-pbt9k" event={"ID":"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00","Type":"ContainerStarted","Data":"3e182cfcad7f128784f7e2e244b0425583a3a96506827f8c0f89aed086193940"} Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.238493 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-pbt9k"] Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.257243 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-f2slh"] Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.258577 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.297828 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-f2slh"] Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.400715 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-dns-svc\") pod \"dnsmasq-dns-6d8746976c-f2slh\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.400775 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2w85\" (UniqueName: \"kubernetes.io/projected/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-kube-api-access-l2w85\") pod \"dnsmasq-dns-6d8746976c-f2slh\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.400834 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-config\") pod \"dnsmasq-dns-6d8746976c-f2slh\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.502314 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-dns-svc\") pod \"dnsmasq-dns-6d8746976c-f2slh\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.502389 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2w85\" (UniqueName: \"kubernetes.io/projected/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-kube-api-access-l2w85\") pod \"dnsmasq-dns-6d8746976c-f2slh\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.502452 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-config\") pod \"dnsmasq-dns-6d8746976c-f2slh\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.503587 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-config\") pod \"dnsmasq-dns-6d8746976c-f2slh\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.504268 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-dns-svc\") pod \"dnsmasq-dns-6d8746976c-f2slh\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.526550 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2w85\" (UniqueName: 
\"kubernetes.io/projected/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-kube-api-access-l2w85\") pod \"dnsmasq-dns-6d8746976c-f2slh\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.614584 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:07 crc kubenswrapper[4988]: I1123 07:04:07.984545 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-9b2ml"] Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.027624 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8nzmm"] Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.031025 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.049255 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8nzmm"] Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.118063 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-dns-svc\") pod \"dnsmasq-dns-6486446b9f-8nzmm\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.118118 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-config\") pod \"dnsmasq-dns-6486446b9f-8nzmm\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.118150 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46zzm\" (UniqueName: \"kubernetes.io/projected/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-kube-api-access-46zzm\") pod \"dnsmasq-dns-6486446b9f-8nzmm\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.183540 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-f2slh"] Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.220954 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-dns-svc\") pod \"dnsmasq-dns-6486446b9f-8nzmm\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.221005 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-config\") pod \"dnsmasq-dns-6486446b9f-8nzmm\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.221027 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46zzm\" (UniqueName: \"kubernetes.io/projected/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-kube-api-access-46zzm\") pod \"dnsmasq-dns-6486446b9f-8nzmm\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " 
pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.222120 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-config\") pod \"dnsmasq-dns-6486446b9f-8nzmm\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.222171 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-dns-svc\") pod \"dnsmasq-dns-6486446b9f-8nzmm\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.268125 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46zzm\" (UniqueName: \"kubernetes.io/projected/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-kube-api-access-46zzm\") pod \"dnsmasq-dns-6486446b9f-8nzmm\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.356025 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.402377 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.403661 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.406055 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.406369 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cfc76" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.406502 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.406506 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.407798 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.408019 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.408093 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.417412 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.531436 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.531801 4988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.531836 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.531864 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/692be1c8-4d8f-4676-89df-19f82b43f043-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.532014 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.532120 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.532214 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.532281 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.532330 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/692be1c8-4d8f-4676-89df-19f82b43f043-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.532355 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff4d8\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-kube-api-access-ff4d8\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.532389 
4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642019 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642107 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642143 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642184 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/692be1c8-4d8f-4676-89df-19f82b43f043-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642240 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642275 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642308 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642342 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642369 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/692be1c8-4d8f-4676-89df-19f82b43f043-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642392 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff4d8\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-kube-api-access-ff4d8\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642416 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.642757 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.643099 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.643775 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.644458 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.645129 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.646429 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.646437 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/692be1c8-4d8f-4676-89df-19f82b43f043-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.649456 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/692be1c8-4d8f-4676-89df-19f82b43f043-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.650796 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.660443 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.664554 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff4d8\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-kube-api-access-ff4d8\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.678427 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.776742 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:04:08 crc kubenswrapper[4988]: I1123 07:04:08.870026 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8nzmm"] Nov 23 07:04:08 crc kubenswrapper[4988]: W1123 07:04:08.875671 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04ee59ec_cbd5_44ce_b1f9_e342d60f2108.slice/crio-012ac45f453e32c45432205c3482beae97c066f1268678f88f8f4552b2bcd2a0 WatchSource:0}: Error finding container 012ac45f453e32c45432205c3482beae97c066f1268678f88f8f4552b2bcd2a0: Status 404 returned error can't find the container with id 012ac45f453e32c45432205c3482beae97c066f1268678f88f8f4552b2bcd2a0 Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.068509 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" event={"ID":"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c","Type":"ContainerStarted","Data":"11181b027f7c3e2a64a14bdb3b0cf73415253801eed480e1d704569a7101f546"} Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.070207 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" event={"ID":"04ee59ec-cbd5-44ce-b1f9-e342d60f2108","Type":"ContainerStarted","Data":"012ac45f453e32c45432205c3482beae97c066f1268678f88f8f4552b2bcd2a0"} Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.145140 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.148480 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.152154 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.152473 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.152586 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.152521 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.152717 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.152754 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tskqm" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.153039 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.162152 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251162 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251225 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251264 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b12d6f8-ea7a-4a60-b459-11563683791d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251303 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251327 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251354 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251380 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251395 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251430 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgs4q\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-kube-api-access-vgs4q\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251459 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.251476 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b12d6f8-ea7a-4a60-b459-11563683791d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.265619 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.352696 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.352784 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b12d6f8-ea7a-4a60-b459-11563683791d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.352844 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.352867 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.352889 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.352915 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.352930 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.352961 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgs4q\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-kube-api-access-vgs4q\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.352992 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.353009 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b12d6f8-ea7a-4a60-b459-11563683791d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.353055 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.355069 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.355071 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.355443 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.355570 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.355814 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.359357 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.365476 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.366041 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b12d6f8-ea7a-4a60-b459-11563683791d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.369317 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b12d6f8-ea7a-4a60-b459-11563683791d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.370316 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.387718 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgs4q\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-kube-api-access-vgs4q\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.389260 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " pod="openstack/rabbitmq-server-0" Nov 23 07:04:09 crc kubenswrapper[4988]: I1123 07:04:09.478600 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.036401 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:04:10 crc kubenswrapper[4988]: W1123 07:04:10.072552 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b12d6f8_ea7a_4a60_b459_11563683791d.slice/crio-01ad1a753187cdbe7823b32a17325dbe6c3d6c6783c3bd1689d04f494855c0fb WatchSource:0}: Error finding container 01ad1a753187cdbe7823b32a17325dbe6c3d6c6783c3bd1689d04f494855c0fb: Status 404 returned error can't find the container with id 01ad1a753187cdbe7823b32a17325dbe6c3d6c6783c3bd1689d04f494855c0fb Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.103581 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"692be1c8-4d8f-4676-89df-19f82b43f043","Type":"ContainerStarted","Data":"e3ee18358f6c32f13a587c8df838c73b6175f58999a8a156ee52948d983da764"} Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.639420 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.642783 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.645802 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.672240 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7kvh5"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.673147 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.673435 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.673622 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.682051 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.788910 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-config-data-default\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.788961 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-kolla-config\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.789009 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.789046 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.789091 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.789116 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.789156 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrmhk\" (UniqueName: \"kubernetes.io/projected/be021496-c112-4578-bfe4-8639fa51480a-kube-api-access-rrmhk\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.789206 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be021496-c112-4578-bfe4-8639fa51480a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.890865 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.890927 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.890973 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrmhk\" (UniqueName: \"kubernetes.io/projected/be021496-c112-4578-bfe4-8639fa51480a-kube-api-access-rrmhk\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.891007 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be021496-c112-4578-bfe4-8639fa51480a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.891043 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-config-data-default\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.891069 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-kolla-config\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.891105 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.891137 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.892038 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.893962 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be021496-c112-4578-bfe4-8639fa51480a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.898659 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.899283 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-config-data-default\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.900049 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.900489 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.918926 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-kolla-config\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.919099 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrmhk\" (UniqueName: \"kubernetes.io/projected/be021496-c112-4578-bfe4-8639fa51480a-kube-api-access-rrmhk\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:10 crc kubenswrapper[4988]: I1123 07:04:10.959280 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " pod="openstack/openstack-galera-0"
Nov 23 07:04:11 crc kubenswrapper[4988]: I1123 07:04:10.989430 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 23 07:04:11 crc kubenswrapper[4988]: I1123 07:04:11.114542 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b12d6f8-ea7a-4a60-b459-11563683791d","Type":"ContainerStarted","Data":"01ad1a753187cdbe7823b32a17325dbe6c3d6c6783c3bd1689d04f494855c0fb"}
Nov 23 07:04:11 crc kubenswrapper[4988]: I1123 07:04:11.515629 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.006233 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.007571 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.009433 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-g6hh9"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.012215 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.012270 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.013228 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.018967 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.112829 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.112886 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2v84\" (UniqueName: \"kubernetes.io/projected/09c548ca-78f0-4e91-8a5d-dce756b0421e-kube-api-access-c2v84\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.112909 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.112928 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.112947 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.112975 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.113004 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.113027 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.216292 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.216350 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2v84\" (UniqueName: \"kubernetes.io/projected/09c548ca-78f0-4e91-8a5d-dce756b0421e-kube-api-access-c2v84\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.216370 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.216389 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.216406 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.216816 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.216862 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.216894 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.216918 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.217766 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.219988 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.220629 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.221689 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.227664 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.246221 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2v84\" (UniqueName: \"kubernetes.io/projected/09c548ca-78f0-4e91-8a5d-dce756b0421e-kube-api-access-c2v84\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.252926 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.279415 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.281178 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.282148 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.286229 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.287692 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.287891 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sdrlz"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.296249 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.334742 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.429418 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m52kk\" (UniqueName: \"kubernetes.io/projected/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kube-api-access-m52kk\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.429495 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kolla-config\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.429755 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-config-data\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.429790 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.429889 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.531815 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.532043 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m52kk\" (UniqueName: \"kubernetes.io/projected/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kube-api-access-m52kk\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.532100 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kolla-config\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.532131 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-config-data\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.532162 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.534782 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kolla-config\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.536000 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-config-data\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.536362 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.543185 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.555081 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m52kk\" (UniqueName: \"kubernetes.io/projected/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kube-api-access-m52kk\") pod \"memcached-0\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " pod="openstack/memcached-0"
Nov 23 07:04:12 crc kubenswrapper[4988]: I1123 07:04:12.635001 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Nov 23 07:04:14 crc kubenswrapper[4988]: I1123 07:04:14.640280 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 23 07:04:14 crc kubenswrapper[4988]: I1123 07:04:14.647790 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 23 07:04:14 crc kubenswrapper[4988]: I1123 07:04:14.650072 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 23 07:04:14 crc kubenswrapper[4988]: I1123 07:04:14.650533 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vzpsk"
Nov 23 07:04:14 crc kubenswrapper[4988]: I1123 07:04:14.773188 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s84vh\" (UniqueName: \"kubernetes.io/projected/7e21c84c-8c43-417f-b4ec-90ce1f19594d-kube-api-access-s84vh\") pod \"kube-state-metrics-0\" (UID: \"7e21c84c-8c43-417f-b4ec-90ce1f19594d\") " pod="openstack/kube-state-metrics-0"
Nov 23 07:04:14 crc kubenswrapper[4988]: I1123 07:04:14.875434 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s84vh\" (UniqueName: \"kubernetes.io/projected/7e21c84c-8c43-417f-b4ec-90ce1f19594d-kube-api-access-s84vh\") pod \"kube-state-metrics-0\" (UID: \"7e21c84c-8c43-417f-b4ec-90ce1f19594d\") " pod="openstack/kube-state-metrics-0"
Nov 23 07:04:14 crc kubenswrapper[4988]: I1123 07:04:14.894245 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s84vh\" (UniqueName: \"kubernetes.io/projected/7e21c84c-8c43-417f-b4ec-90ce1f19594d-kube-api-access-s84vh\") pod \"kube-state-metrics-0\" (UID: \"7e21c84c-8c43-417f-b4ec-90ce1f19594d\") " pod="openstack/kube-state-metrics-0"
Nov 23 07:04:14 crc kubenswrapper[4988]: I1123 07:04:14.977161 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.308717 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.310876 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.314340 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-jqvdf"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.314748 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.315056 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.315294 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.316653 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.337716 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.413846 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-config\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.413920 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.413978 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.414001 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.414160 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.414246 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthpx\" (UniqueName: \"kubernetes.io/projected/832df8ad-6b73-46a8-979f-ec3887c49e83-kube-api-access-tthpx\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.414509 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.414715 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.516150 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.516244 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.516274 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-config\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.516307 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.516348 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.516368 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.516398 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.516418 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthpx\" (UniqueName: \"kubernetes.io/projected/832df8ad-6b73-46a8-979f-ec3887c49e83-kube-api-access-tthpx\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.516540 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.517154 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.518309 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-config\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.518919 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.523335 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.523801 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.525325 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.541662 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthpx\" (UniqueName: \"kubernetes.io/projected/832df8ad-6b73-46a8-979f-ec3887c49e83-kube-api-access-tthpx\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.552370 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:17 crc kubenswrapper[4988]: I1123 07:04:17.652998 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.449352 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zcfbn"]
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.450987 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.453912 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-s9lsb"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.457611 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.457738 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.461704 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7xsjx"]
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.463805 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.471149 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zcfbn"]
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.492300 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7xsjx"]
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536156 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwj6f\" (UniqueName: \"kubernetes.io/projected/cdef8d22-1ecf-4086-9506-16378fd96db2-kube-api-access-dwj6f\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536231 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-combined-ca-bundle\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536256 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536271 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run-ovn\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536288 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-log-ovn\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536304 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-run\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536322 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/618fb238-2a5a-4265-9545-9ccbf016f855-scripts\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536343 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-lib\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536366 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-etc-ovs\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536380 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-log\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536412 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnqj\" (UniqueName: \"kubernetes.io/projected/618fb238-2a5a-4265-9545-9ccbf016f855-kube-api-access-gfnqj\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536437 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdef8d22-1ecf-4086-9506-16378fd96db2-scripts\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.536455 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-ovn-controller-tls-certs\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638317 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-run\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638369 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-log-ovn\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638410 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/618fb238-2a5a-4265-9545-9ccbf016f855-scripts\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638440 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-lib\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638469 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-etc-ovs\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638495 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-log\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638544 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfnqj\" (UniqueName: \"kubernetes.io/projected/618fb238-2a5a-4265-9545-9ccbf016f855-kube-api-access-gfnqj\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638581 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdef8d22-1ecf-4086-9506-16378fd96db2-scripts\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638611 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-ovn-controller-tls-certs\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638663 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwj6f\" (UniqueName: \"kubernetes.io/projected/cdef8d22-1ecf-4086-9506-16378fd96db2-kube-api-access-dwj6f\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638698 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-combined-ca-bundle\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638733 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.638754 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run-ovn\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.639767 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-log-ovn\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.639897 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-etc-ovs\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.640079 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.640104 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-run\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.641287 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/618fb238-2a5a-4265-9545-9ccbf016f855-scripts\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.641824 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdef8d22-1ecf-4086-9506-16378fd96db2-scripts\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.641985 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-lib\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.642053 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run-ovn\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.642060 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-log\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.643618 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-ovn-controller-tls-certs\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.644275 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-combined-ca-bundle\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.672796 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfnqj\" (UniqueName: \"kubernetes.io/projected/618fb238-2a5a-4265-9545-9ccbf016f855-kube-api-access-gfnqj\") pod \"ovn-controller-ovs-7xsjx\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.677743 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwj6f\" (UniqueName: \"kubernetes.io/projected/cdef8d22-1ecf-4086-9506-16378fd96db2-kube-api-access-dwj6f\") pod \"ovn-controller-zcfbn\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.770675 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:18 crc kubenswrapper[4988]: I1123 07:04:18.781368 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7xsjx"
Nov 23 07:04:20 crc kubenswrapper[4988]: W1123 07:04:20.217431 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe021496_c112_4578_bfe4_8639fa51480a.slice/crio-47b91f5e3569d48f4628c1cba860429e403d05a87989e729f284b37431b94832 WatchSource:0}: Error finding container 47b91f5e3569d48f4628c1cba860429e403d05a87989e729f284b37431b94832: Status 404 returned error can't find the container with id 47b91f5e3569d48f4628c1cba860429e403d05a87989e729f284b37431b94832
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.220921 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be021496-c112-4578-bfe4-8639fa51480a","Type":"ContainerStarted","Data":"47b91f5e3569d48f4628c1cba860429e403d05a87989e729f284b37431b94832"}
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.550488 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.552081 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.555352 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.555723 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.556016 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-t42tv"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.556304 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.566983 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.588354 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.588453 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.588484 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.588525 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.588559 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdrj4\" (UniqueName: \"kubernetes.io/projected/c11163ee-e1e7-47a7-a454-610a8b27542f-kube-api-access-kdrj4\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.588582 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.588663 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.588696 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.672551 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.672599 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.689580 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.689697 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.689724 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.689755 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.689790 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdrj4\" (UniqueName: \"kubernetes.io/projected/c11163ee-e1e7-47a7-a454-610a8b27542f-kube-api-access-kdrj4\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.689813 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.689853 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.689880 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.690306 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.690506 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.690585 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-config\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.691452 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.697977 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.698318 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.705915 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.712459 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdrj4\" (UniqueName: \"kubernetes.io/projected/c11163ee-e1e7-47a7-a454-610a8b27542f-kube-api-access-kdrj4\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.716842 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") " pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:21 crc kubenswrapper[4988]: I1123 07:04:21.878103 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:28 crc kubenswrapper[4988]: E1123 07:04:28.288741 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b"
Nov 23 07:04:28 crc kubenswrapper[4988]: E1123 07:04:28.289525 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgs4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0b12d6f8-ea7a-4a60-b459-11563683791d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:04:28 crc kubenswrapper[4988]: E1123 07:04:28.290702 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" Nov 23 07:04:29 crc kubenswrapper[4988]: E1123 07:04:29.292073 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" Nov 23 07:04:33 crc kubenswrapper[4988]: I1123 07:04:33.218009 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.542493 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.543102 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46zzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6486446b9f-8nzmm_openstack(04ee59ec-cbd5-44ce-b1f9-e342d60f2108): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.544415 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" podUID="04ee59ec-cbd5-44ce-b1f9-e342d60f2108" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.633131 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.633369 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed 
--no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8tx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6584b49599-pbt9k_openstack(6cbe0f1c-5830-4cf8-a99a-38b0f357ed00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.634480 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6584b49599-pbt9k" podUID="6cbe0f1c-5830-4cf8-a99a-38b0f357ed00" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.637821 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.638011 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dj9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bdd77c89-9b2ml_openstack(f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.639703 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" podUID="f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.665901 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.666036 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l2w85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6d8746976c-f2slh_openstack(568dfaa6-ec62-4a8d-ad5d-8947417d0c1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:04:35 crc kubenswrapper[4988]: E1123 07:04:35.667431 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" podUID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.039544 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 07:04:36 crc kubenswrapper[4988]: W1123 07:04:36.041693 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cb35e7c_c792_48c9_8f52_ac3e9cc283f6.slice/crio-1b3c8c140876172d79a9382a024018f0383865bdc978003bdcd13ce08b6ea01c WatchSource:0}: Error finding container 1b3c8c140876172d79a9382a024018f0383865bdc978003bdcd13ce08b6ea01c: Status 404 returned error can't find the container with id 1b3c8c140876172d79a9382a024018f0383865bdc978003bdcd13ce08b6ea01c Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.202626 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:04:36 crc kubenswrapper[4988]: W1123 07:04:36.211816 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e21c84c_8c43_417f_b4ec_90ce1f19594d.slice/crio-48058125d20fb72088aabfa9d5c68c0c6ae8c5112add6d668425c133a1c63db9 WatchSource:0}: Error finding container 48058125d20fb72088aabfa9d5c68c0c6ae8c5112add6d668425c133a1c63db9: Status 404 returned error can't find the container 
with id 48058125d20fb72088aabfa9d5c68c0c6ae8c5112add6d668425c133a1c63db9 Nov 23 07:04:36 crc kubenswrapper[4988]: W1123 07:04:36.212728 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdef8d22_1ecf_4086_9506_16378fd96db2.slice/crio-3a483ab684bc2be58ce75d04f3eea6398f678bb332be2bc009640594e15a7d2b WatchSource:0}: Error finding container 3a483ab684bc2be58ce75d04f3eea6398f678bb332be2bc009640594e15a7d2b: Status 404 returned error can't find the container with id 3a483ab684bc2be58ce75d04f3eea6398f678bb332be2bc009640594e15a7d2b Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.212866 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zcfbn"] Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.285756 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:04:36 crc kubenswrapper[4988]: W1123 07:04:36.296631 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09c548ca_78f0_4e91_8a5d_dce756b0421e.slice/crio-fffb9ff92a9e03963faf1e5a3d437029a8fd2d9e4b349c0cf3289d45f30bba22 WatchSource:0}: Error finding container fffb9ff92a9e03963faf1e5a3d437029a8fd2d9e4b349c0cf3289d45f30bba22: Status 404 returned error can't find the container with id fffb9ff92a9e03963faf1e5a3d437029a8fd2d9e4b349c0cf3289d45f30bba22 Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.336609 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be021496-c112-4578-bfe4-8639fa51480a","Type":"ContainerStarted","Data":"7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747"} Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.338480 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"09c548ca-78f0-4e91-8a5d-dce756b0421e","Type":"ContainerStarted","Data":"fffb9ff92a9e03963faf1e5a3d437029a8fd2d9e4b349c0cf3289d45f30bba22"} Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.340214 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6","Type":"ContainerStarted","Data":"1b3c8c140876172d79a9382a024018f0383865bdc978003bdcd13ce08b6ea01c"} Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.358363 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.366084 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"832df8ad-6b73-46a8-979f-ec3887c49e83","Type":"ContainerStarted","Data":"d26b8143bec5fc36b678b5c84272c5c9a22b7d2e5f2a4cfc0d8b7ab07af17913"} Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.367903 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zcfbn" event={"ID":"cdef8d22-1ecf-4086-9506-16378fd96db2","Type":"ContainerStarted","Data":"3a483ab684bc2be58ce75d04f3eea6398f678bb332be2bc009640594e15a7d2b"} Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.369075 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7e21c84c-8c43-417f-b4ec-90ce1f19594d","Type":"ContainerStarted","Data":"48058125d20fb72088aabfa9d5c68c0c6ae8c5112add6d668425c133a1c63db9"} Nov 23 07:04:36 crc kubenswrapper[4988]: E1123 07:04:36.371866 4988 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" podUID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c"
Nov 23 07:04:36 crc kubenswrapper[4988]: E1123 07:04:36.372741 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" podUID="04ee59ec-cbd5-44ce-b1f9-e342d60f2108"
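[Note] The two "Error syncing pod" entries above mark the transition from ErrImagePull (the canceled pulls at 07:04:28 and 07:04:35) to ImagePullBackOff: once a pull has failed, the kubelet throttles retries per image and fails subsequent sync attempts immediately until the back-off window expires. A minimal sketch of that doubling back-off bookkeeping, illustrative only; the kubelet keeps this state in its own flow-control utilities, and the 10s initial / 300s cap values are assumptions based on its documented defaults:

package main

import (
	"fmt"
	"time"
)

// pullBackoff mimics per-image back-off bookkeeping: each consecutive
// failure doubles the wait, capped at a maximum.
type pullBackoff struct {
	initial, max time.Duration
	next         map[string]time.Time     // earliest time a retry is allowed
	delay        map[string]time.Duration // current back-off window per image
}

func newPullBackoff() *pullBackoff {
	return &pullBackoff{
		initial: 10 * time.Second,  // assumed kubelet default
		max:     300 * time.Second, // assumed kubelet default cap
		next:    map[string]time.Time{},
		delay:   map[string]time.Duration{},
	}
}

// shouldPull reports whether a pull may be attempted now; a caller seeing
// false would surface ImagePullBackOff, as in the entries above.
func (b *pullBackoff) shouldPull(image string, now time.Time) bool {
	return now.After(b.next[image])
}

// recordFailure notes an ErrImagePull and widens the window.
func (b *pullBackoff) recordFailure(image string, now time.Time) {
	d := b.delay[image]
	if d == 0 {
		d = b.initial
	} else if d *= 2; d > b.max {
		d = b.max
	}
	b.delay[image] = d
	b.next[image] = now.Add(d)
}

func main() {
	b := newPullBackoff()
	img := "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b"
	now := time.Now()
	b.recordFailure(img, now)                               // first ErrImagePull
	fmt.Println(b.shouldPull(img, now.Add(5*time.Second)))  // false -> ImagePullBackOff
	fmt.Println(b.shouldPull(img, now.Add(11*time.Second))) // true -> retry allowed
}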
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.967437 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cbe0f1c-5830-4cf8-a99a-38b0f357ed00" (UID: "6cbe0f1c-5830-4cf8-a99a-38b0f357ed00"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.967932 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-config" (OuterVolumeSpecName: "config") pod "f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0" (UID: "f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.968634 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.968657 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.971600 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-kube-api-access-x8tx9" (OuterVolumeSpecName: "kube-api-access-x8tx9") pod "6cbe0f1c-5830-4cf8-a99a-38b0f357ed00" (UID: "6cbe0f1c-5830-4cf8-a99a-38b0f357ed00"). InnerVolumeSpecName "kube-api-access-x8tx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:36 crc kubenswrapper[4988]: I1123 07:04:36.971696 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-kube-api-access-2dj9b" (OuterVolumeSpecName: "kube-api-access-2dj9b") pod "f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0" (UID: "f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0"). InnerVolumeSpecName "kube-api-access-2dj9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.070561 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.070602 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8tx9\" (UniqueName: \"kubernetes.io/projected/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00-kube-api-access-x8tx9\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.070618 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dj9b\" (UniqueName: \"kubernetes.io/projected/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0-kube-api-access-2dj9b\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.367834 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7xsjx"] Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.380397 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" event={"ID":"f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0","Type":"ContainerDied","Data":"0c5c37d9681bc8bc858b47a305d62d277fc1dbf4e96b88357674c1eba8e23a51"} Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.380453 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-9b2ml" Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.381179 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c11163ee-e1e7-47a7-a454-610a8b27542f","Type":"ContainerStarted","Data":"b085d4fb1ef64a900c73a1423110863ef3bde1a7f93da2bc209c52957335034c"} Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.382514 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"692be1c8-4d8f-4676-89df-19f82b43f043","Type":"ContainerStarted","Data":"ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a"} Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.384307 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-pbt9k" Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.384313 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-pbt9k" event={"ID":"6cbe0f1c-5830-4cf8-a99a-38b0f357ed00","Type":"ContainerDied","Data":"3e182cfcad7f128784f7e2e244b0425583a3a96506827f8c0f89aed086193940"} Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.386426 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"09c548ca-78f0-4e91-8a5d-dce756b0421e","Type":"ContainerStarted","Data":"6cab56cdeb602d70ff9e9c195a76c5f7dd21de9bcc59905fa9189998e4fff53d"} Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.479298 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-pbt9k"] Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.489786 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-pbt9k"] Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.507329 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-9b2ml"] Nov 23 07:04:37 crc kubenswrapper[4988]: I1123 07:04:37.514718 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-9b2ml"] Nov 23 07:04:38 crc kubenswrapper[4988]: I1123 07:04:38.506492 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cbe0f1c-5830-4cf8-a99a-38b0f357ed00" path="/var/lib/kubelet/pods/6cbe0f1c-5830-4cf8-a99a-38b0f357ed00/volumes" Nov 23 07:04:38 crc kubenswrapper[4988]: I1123 07:04:38.507158 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0" path="/var/lib/kubelet/pods/f9ee6660-2f0d-4b5d-99e8-eb15e6d77af0/volumes" Nov 23 07:04:39 crc kubenswrapper[4988]: W1123 07:04:39.626099 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod618fb238_2a5a_4265_9545_9ccbf016f855.slice/crio-6a44b8c53ec4907db963b6534027cc4bc118b9c92141b2bda62677827436fea8 WatchSource:0}: Error finding container 6a44b8c53ec4907db963b6534027cc4bc118b9c92141b2bda62677827436fea8: Status 404 returned error can't find the container with id 6a44b8c53ec4907db963b6534027cc4bc118b9c92141b2bda62677827436fea8 Nov 23 07:04:40 crc kubenswrapper[4988]: I1123 07:04:40.427134 4988 generic.go:334] "Generic (PLEG): container finished" podID="09c548ca-78f0-4e91-8a5d-dce756b0421e" containerID="6cab56cdeb602d70ff9e9c195a76c5f7dd21de9bcc59905fa9189998e4fff53d" exitCode=0 Nov 23 07:04:40 crc kubenswrapper[4988]: I1123 07:04:40.427336 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"09c548ca-78f0-4e91-8a5d-dce756b0421e","Type":"ContainerDied","Data":"6cab56cdeb602d70ff9e9c195a76c5f7dd21de9bcc59905fa9189998e4fff53d"} Nov 23 07:04:40 crc kubenswrapper[4988]: I1123 07:04:40.440557 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xsjx" event={"ID":"618fb238-2a5a-4265-9545-9ccbf016f855","Type":"ContainerStarted","Data":"6a44b8c53ec4907db963b6534027cc4bc118b9c92141b2bda62677827436fea8"} Nov 23 07:04:40 crc kubenswrapper[4988]: I1123 07:04:40.443740 4988 generic.go:334] "Generic (PLEG): container finished" podID="be021496-c112-4578-bfe4-8639fa51480a" containerID="7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747" exitCode=0 Nov 23 07:04:40 crc 
kubenswrapper[4988]: I1123 07:04:40.443779 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be021496-c112-4578-bfe4-8639fa51480a","Type":"ContainerDied","Data":"7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747"}
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.460688 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zcfbn" event={"ID":"cdef8d22-1ecf-4086-9506-16378fd96db2","Type":"ContainerStarted","Data":"02b318a143f93f5162fb1a66a0b628ccb66bbcc980bb1b1551d1e97b656766fe"}
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.461279 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zcfbn"
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.463320 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7e21c84c-8c43-417f-b4ec-90ce1f19594d","Type":"ContainerStarted","Data":"f30db7695834aef0fb068006041a6bec320ddcc9a460440affee5b182c0e4bba"}
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.463767 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.465371 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c11163ee-e1e7-47a7-a454-610a8b27542f","Type":"ContainerStarted","Data":"b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb"}
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.468792 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be021496-c112-4578-bfe4-8639fa51480a","Type":"ContainerStarted","Data":"b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5"}
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.470965 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"09c548ca-78f0-4e91-8a5d-dce756b0421e","Type":"ContainerStarted","Data":"2449f703f7311cf646d0edc484376bd64bc707c2506b087647d3f70f964b9a7b"}
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.472850 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6","Type":"ContainerStarted","Data":"fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7"}
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.473330 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.474686 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xsjx" event={"ID":"618fb238-2a5a-4265-9545-9ccbf016f855","Type":"ContainerStarted","Data":"23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6"}
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.476276 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"832df8ad-6b73-46a8-979f-ec3887c49e83","Type":"ContainerStarted","Data":"ff17dfc095111f52510f65b355dd4871947190d74f7a84b768c2f07965a73d84"}
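[Note] The "SyncLoop (probe)" entries interleaved above record probe-result transitions, not individual probe executions: status="" is the initial, not-yet-established result right after ContainerStarted, and it flips to "ready" once the readiness check passes (visible for memcached-0 and ovsdbserver-nb-0 at 07:04:47-48 below). The liveness failure at 07:04:21 earlier shows what a failing HTTP check looks like. A self-contained sketch of such an HTTP probe loop, illustrative only; the URL is taken from that liveness entry, and the 1s period is an assumption:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP-GET check of the kind that produced the
// "connection refused" liveness failure at 07:04:21 above.
func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Result transitions "" (unknown) -> "ready"/"not ready" are what the
	// "SyncLoop (probe)" entries record.
	status := ""
	for i := 0; i < 3; i++ {
		if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
			status = "not ready"
			fmt.Println(status, err)
		} else {
			status = "ready"
			fmt.Println(status)
		}
		time.Sleep(time.Second) // assumed periodSeconds=1 for the demo
	}
}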
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.487667 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zcfbn" podStartSLOduration=19.938811946 podStartE2EDuration="24.487652241s" podCreationTimestamp="2025-11-23 07:04:18 +0000 UTC" firstStartedPulling="2025-11-23 07:04:36.214787876 +0000 UTC m=+1128.523300639" lastFinishedPulling="2025-11-23 07:04:40.763628171 +0000 UTC m=+1133.072140934" observedRunningTime="2025-11-23 07:04:42.484879693 +0000 UTC m=+1134.793392476" watchObservedRunningTime="2025-11-23 07:04:42.487652241 +0000 UTC m=+1134.796165004"
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.514033 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.033235163 podStartE2EDuration="33.514015411s" podCreationTimestamp="2025-11-23 07:04:09 +0000 UTC" firstStartedPulling="2025-11-23 07:04:20.220497915 +0000 UTC m=+1112.529010678" lastFinishedPulling="2025-11-23 07:04:35.701278163 +0000 UTC m=+1128.009790926" observedRunningTime="2025-11-23 07:04:42.509416387 +0000 UTC m=+1134.817929150" watchObservedRunningTime="2025-11-23 07:04:42.514015411 +0000 UTC m=+1134.822528174"
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.533602 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=25.926737749 podStartE2EDuration="30.533583502s" podCreationTimestamp="2025-11-23 07:04:12 +0000 UTC" firstStartedPulling="2025-11-23 07:04:36.043722154 +0000 UTC m=+1128.352234907" lastFinishedPulling="2025-11-23 07:04:40.650567877 +0000 UTC m=+1132.959080660" observedRunningTime="2025-11-23 07:04:42.532496496 +0000 UTC m=+1134.841009259" watchObservedRunningTime="2025-11-23 07:04:42.533583502 +0000 UTC m=+1134.842096265"
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.551132 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.758807281 podStartE2EDuration="28.551107394s" podCreationTimestamp="2025-11-23 07:04:14 +0000 UTC" firstStartedPulling="2025-11-23 07:04:36.211257979 +0000 UTC m=+1128.519770752" lastFinishedPulling="2025-11-23 07:04:42.003558102 +0000 UTC m=+1134.312070865" observedRunningTime="2025-11-23 07:04:42.547165817 +0000 UTC m=+1134.855678580" watchObservedRunningTime="2025-11-23 07:04:42.551107394 +0000 UTC m=+1134.859620157"
Nov 23 07:04:42 crc kubenswrapper[4988]: I1123 07:04:42.594863 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=32.594844401 podStartE2EDuration="32.594844401s" podCreationTimestamp="2025-11-23 07:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:04:42.592739009 +0000 UTC m=+1134.901251772" watchObservedRunningTime="2025-11-23 07:04:42.594844401 +0000 UTC m=+1134.903357164"
Nov 23 07:04:43 crc kubenswrapper[4988]: I1123 07:04:43.484866 4988 generic.go:334] "Generic (PLEG): container finished" podID="618fb238-2a5a-4265-9545-9ccbf016f855" containerID="23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6" exitCode=0
Nov 23 07:04:43 crc kubenswrapper[4988]: I1123 07:04:43.484953 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xsjx" event={"ID":"618fb238-2a5a-4265-9545-9ccbf016f855","Type":"ContainerDied","Data":"23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6"}
Nov 23 07:04:44 crc kubenswrapper[4988]: I1123 07:04:44.519593 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xsjx" event={"ID":"618fb238-2a5a-4265-9545-9ccbf016f855","Type":"ContainerStarted","Data":"9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47"}
event={"ID":"618fb238-2a5a-4265-9545-9ccbf016f855","Type":"ContainerStarted","Data":"9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47"} Nov 23 07:04:45 crc kubenswrapper[4988]: I1123 07:04:45.508220 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xsjx" event={"ID":"618fb238-2a5a-4265-9545-9ccbf016f855","Type":"ContainerStarted","Data":"3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7"} Nov 23 07:04:45 crc kubenswrapper[4988]: I1123 07:04:45.508779 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7xsjx" Nov 23 07:04:45 crc kubenswrapper[4988]: I1123 07:04:45.527910 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7xsjx" podStartSLOduration=25.991464689 podStartE2EDuration="27.52789071s" podCreationTimestamp="2025-11-23 07:04:18 +0000 UTC" firstStartedPulling="2025-11-23 07:04:39.639682256 +0000 UTC m=+1131.948195019" lastFinishedPulling="2025-11-23 07:04:41.176108277 +0000 UTC m=+1133.484621040" observedRunningTime="2025-11-23 07:04:45.527327776 +0000 UTC m=+1137.835840539" watchObservedRunningTime="2025-11-23 07:04:45.52789071 +0000 UTC m=+1137.836403473" Nov 23 07:04:46 crc kubenswrapper[4988]: I1123 07:04:46.515581 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b12d6f8-ea7a-4a60-b459-11563683791d","Type":"ContainerStarted","Data":"26570438aa5de5396fe7cefb58e572a2b24bad93573fe8afe37c0a4d296c6949"} Nov 23 07:04:46 crc kubenswrapper[4988]: I1123 07:04:46.518120 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"832df8ad-6b73-46a8-979f-ec3887c49e83","Type":"ContainerStarted","Data":"68466f03cc012e80f4cfdb29fa67746fe3b1696571d0bc999ac0bb9f1d9506e5"} Nov 23 07:04:46 crc kubenswrapper[4988]: I1123 07:04:46.522587 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c11163ee-e1e7-47a7-a454-610a8b27542f","Type":"ContainerStarted","Data":"96a7d026dbf741ef763cbef53b41d738422ff89aae829eedde4d6c6dc76818d6"} Nov 23 07:04:46 crc kubenswrapper[4988]: I1123 07:04:46.522642 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7xsjx" Nov 23 07:04:46 crc kubenswrapper[4988]: I1123 07:04:46.577344 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.485872004 podStartE2EDuration="26.57732847s" podCreationTimestamp="2025-11-23 07:04:20 +0000 UTC" firstStartedPulling="2025-11-23 07:04:36.372381557 +0000 UTC m=+1128.680894320" lastFinishedPulling="2025-11-23 07:04:45.463838023 +0000 UTC m=+1137.772350786" observedRunningTime="2025-11-23 07:04:46.5740643 +0000 UTC m=+1138.882577123" watchObservedRunningTime="2025-11-23 07:04:46.57732847 +0000 UTC m=+1138.885841233" Nov 23 07:04:46 crc kubenswrapper[4988]: I1123 07:04:46.598765 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.640618513 podStartE2EDuration="30.598745318s" podCreationTimestamp="2025-11-23 07:04:16 +0000 UTC" firstStartedPulling="2025-11-23 07:04:35.522889591 +0000 UTC m=+1127.831402364" lastFinishedPulling="2025-11-23 07:04:45.481016416 +0000 UTC m=+1137.789529169" observedRunningTime="2025-11-23 07:04:46.594340589 +0000 UTC m=+1138.902853362" watchObservedRunningTime="2025-11-23 07:04:46.598745318 +0000 UTC 
Nov 23 07:04:46 crc kubenswrapper[4988]: I1123 07:04:46.878613 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:47 crc kubenswrapper[4988]: I1123 07:04:47.636515 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 23 07:04:47 crc kubenswrapper[4988]: I1123 07:04:47.653097 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:47 crc kubenswrapper[4988]: I1123 07:04:47.653347 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:47 crc kubenswrapper[4988]: I1123 07:04:47.717321 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.549571 4988 generic.go:334] "Generic (PLEG): container finished" podID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" containerID="cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4" exitCode=0
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.549830 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" event={"ID":"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c","Type":"ContainerDied","Data":"cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4"}
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.598380 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.853782 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8nzmm"]
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.878894 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-w9t7n"]
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.880109 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.880185 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n"
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.892257 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.905796 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-w9t7n"]
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.920048 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-rmdh8"]
Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.921076 4988 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.925369 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.925704 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rmdh8"] Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.948387 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.997378 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgbpr\" (UniqueName: \"kubernetes.io/projected/8b125fe4-eafa-4b2d-8d93-0c213022141b-kube-api-access-kgbpr\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.997498 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.997723 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:48 crc kubenswrapper[4988]: I1123 07:04:48.997769 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-config\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.080041 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-f2slh"] Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101563 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-config\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101609 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovs-rundir\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101653 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwmj\" (UniqueName: \"kubernetes.io/projected/2afd4c0a-59f2-4313-a198-4e0e8255f163-kube-api-access-xnwmj\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" 
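[Note] The reconciler_common.go entries for dnsmasq-dns-6c65c5f57f-w9t7n and ovn-controller-metrics-rmdh8 above show the kubelet volume manager's two-phase pattern: each volume in the pod spec is first confirmed attached (VerifyControllerAttachedVolume, reconciler_common.go:245), then mounted (MountVolume started, reconciler_common.go:218, followed by MountVolume.SetUp succeeded, operation_generator.go:637). A much-simplified sketch of that desired-state versus actual-state reconciliation; the names and types here are illustrative, not the kubelet's real ones:

package main

import "fmt"

type volumeState int

const (
	attached volumeState = iota
	mounted
)

func main() {
	// Volumes demanded by the pod spec (taken from the entries above).
	desired := []string{"config", "dns-svc", "ovsdbserver-nb", "kube-api-access-kgbpr"}
	actual := map[string]volumeState{}

	// Pass 1: verify attachment for every volume the pod spec demands.
	for _, v := range desired {
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v)
		actual[v] = attached
	}
	// Pass 2: mount whatever is attached but not yet mounted.
	for _, v := range desired {
		if actual[v] == attached {
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
			actual[v] = mounted
		}
	}
}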
Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101706 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgbpr\" (UniqueName: \"kubernetes.io/projected/8b125fe4-eafa-4b2d-8d93-0c213022141b-kube-api-access-kgbpr\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101749 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101787 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afd4c0a-59f2-4313-a198-4e0e8255f163-config\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101823 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101856 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovn-rundir\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101879 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-combined-ca-bundle\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.101906 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.103353 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-config\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.105628 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc 
kubenswrapper[4988]: I1123 07:04:49.108594 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.121774 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-d54jk"] Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.123061 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.129283 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.136814 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-d54jk"] Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.164094 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgbpr\" (UniqueName: \"kubernetes.io/projected/8b125fe4-eafa-4b2d-8d93-0c213022141b-kube-api-access-kgbpr\") pod \"dnsmasq-dns-6c65c5f57f-w9t7n\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.207127 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.208121 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afd4c0a-59f2-4313-a198-4e0e8255f163-config\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.208214 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovn-rundir\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.208236 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-combined-ca-bundle\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.208287 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovs-rundir\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.208325 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwmj\" (UniqueName: 
\"kubernetes.io/projected/2afd4c0a-59f2-4313-a198-4e0e8255f163-kube-api-access-xnwmj\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.209252 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afd4c0a-59f2-4313-a198-4e0e8255f163-config\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.209506 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovn-rundir\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.209861 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovs-rundir\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.210310 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.217848 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-combined-ca-bundle\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.224211 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwmj\" (UniqueName: \"kubernetes.io/projected/2afd4c0a-59f2-4313-a198-4e0e8255f163-kube-api-access-xnwmj\") pod \"ovn-controller-metrics-rmdh8\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") " pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.224501 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.246547 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.309426 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s8l\" (UniqueName: \"kubernetes.io/projected/11409714-1e50-472e-bfdf-1d964d2b19b7-kube-api-access-w4s8l\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.309505 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.309538 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.309585 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.309612 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-config\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.410979 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.411040 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.411089 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.411119 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-config\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: 
\"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.411157 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4s8l\" (UniqueName: \"kubernetes.io/projected/11409714-1e50-472e-bfdf-1d964d2b19b7-kube-api-access-w4s8l\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.412384 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.412439 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.412935 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.412984 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-config\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.434874 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4s8l\" (UniqueName: \"kubernetes.io/projected/11409714-1e50-472e-bfdf-1d964d2b19b7-kube-api-access-w4s8l\") pod \"dnsmasq-dns-5c476d78c5-d54jk\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.483542 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.565736 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" event={"ID":"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c","Type":"ContainerStarted","Data":"cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134"} Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.565876 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" podUID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" containerName="dnsmasq-dns" containerID="cri-o://cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134" gracePeriod=10 Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.566155 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.571749 4988 generic.go:334] "Generic (PLEG): container finished" podID="04ee59ec-cbd5-44ce-b1f9-e342d60f2108" containerID="0bbd062a10ed1b0b11a546f525e2006de6fc39fde9ae3ae6592d7247754490e6" exitCode=0 Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.571900 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" event={"ID":"04ee59ec-cbd5-44ce-b1f9-e342d60f2108","Type":"ContainerDied","Data":"0bbd062a10ed1b0b11a546f525e2006de6fc39fde9ae3ae6592d7247754490e6"} Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.589668 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" podStartSLOduration=2.834946637 podStartE2EDuration="42.589641441s" podCreationTimestamp="2025-11-23 07:04:07 +0000 UTC" firstStartedPulling="2025-11-23 07:04:08.234341106 +0000 UTC m=+1100.542853869" lastFinishedPulling="2025-11-23 07:04:47.9890359 +0000 UTC m=+1140.297548673" observedRunningTime="2025-11-23 07:04:49.588608945 +0000 UTC m=+1141.897121728" watchObservedRunningTime="2025-11-23 07:04:49.589641441 +0000 UTC m=+1141.898154204" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.630438 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.783552 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.790128 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.794882 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-nq98t" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.794902 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.795086 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.795220 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.806712 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-w9t7n"] Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.811862 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.898416 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-rmdh8"] Nov 23 07:04:49 crc kubenswrapper[4988]: W1123 07:04:49.899231 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2afd4c0a_59f2_4313_a198_4e0e8255f163.slice/crio-02b550aa387b141edfbfa1c016e664f26bf90d025854ef06c103fa21d674e563 WatchSource:0}: Error finding container 02b550aa387b141edfbfa1c016e664f26bf90d025854ef06c103fa21d674e563: Status 404 returned error can't find the container with id 02b550aa387b141edfbfa1c016e664f26bf90d025854ef06c103fa21d674e563 Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.945522 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.945577 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq6nc\" (UniqueName: \"kubernetes.io/projected/2e680706-1677-4f92-9957-9dd477bbc7be-kube-api-access-lq6nc\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.945647 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.945691 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.945866 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.946917 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-config\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.947017 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-scripts\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:49 crc kubenswrapper[4988]: I1123 07:04:49.978696 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.050379 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.050414 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.050475 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.050527 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-config\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.050592 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-scripts\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.050619 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.050633 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq6nc\" (UniqueName: \"kubernetes.io/projected/2e680706-1677-4f92-9957-9dd477bbc7be-kube-api-access-lq6nc\") pod \"ovn-northd-0\" (UID: 
\"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.053123 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.053534 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-scripts\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.053663 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-config\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.056183 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.065318 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.066243 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.071253 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq6nc\" (UniqueName: \"kubernetes.io/projected/2e680706-1677-4f92-9957-9dd477bbc7be-kube-api-access-lq6nc\") pod \"ovn-northd-0\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.086105 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-d54jk"] Nov 23 07:04:50 crc kubenswrapper[4988]: W1123 07:04:50.094790 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11409714_1e50_472e_bfdf_1d964d2b19b7.slice/crio-de0590f6bb50ff51a6939c255f23ca112f263cf254ab4e161da34133656fb079 WatchSource:0}: Error finding container de0590f6bb50ff51a6939c255f23ca112f263cf254ab4e161da34133656fb079: Status 404 returned error can't find the container with id de0590f6bb50ff51a6939c255f23ca112f263cf254ab4e161da34133656fb079 Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.114633 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.151769 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46zzm\" (UniqueName: \"kubernetes.io/projected/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-kube-api-access-46zzm\") pod \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.151817 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-config\") pod \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.151891 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-dns-svc\") pod \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\" (UID: \"04ee59ec-cbd5-44ce-b1f9-e342d60f2108\") " Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.162863 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-kube-api-access-46zzm" (OuterVolumeSpecName: "kube-api-access-46zzm") pod "04ee59ec-cbd5-44ce-b1f9-e342d60f2108" (UID: "04ee59ec-cbd5-44ce-b1f9-e342d60f2108"). InnerVolumeSpecName "kube-api-access-46zzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.175185 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "04ee59ec-cbd5-44ce-b1f9-e342d60f2108" (UID: "04ee59ec-cbd5-44ce-b1f9-e342d60f2108"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.175784 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-config" (OuterVolumeSpecName: "config") pod "04ee59ec-cbd5-44ce-b1f9-e342d60f2108" (UID: "04ee59ec-cbd5-44ce-b1f9-e342d60f2108"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.254983 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46zzm\" (UniqueName: \"kubernetes.io/projected/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-kube-api-access-46zzm\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.255387 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.255400 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04ee59ec-cbd5-44ce-b1f9-e342d60f2108-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.290970 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.458335 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2w85\" (UniqueName: \"kubernetes.io/projected/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-kube-api-access-l2w85\") pod \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.458403 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-config\") pod \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.458466 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-dns-svc\") pod \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\" (UID: \"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c\") " Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.466024 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-kube-api-access-l2w85" (OuterVolumeSpecName: "kube-api-access-l2w85") pod "568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" (UID: "568dfaa6-ec62-4a8d-ad5d-8947417d0c1c"). InnerVolumeSpecName "kube-api-access-l2w85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.507455 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" (UID: "568dfaa6-ec62-4a8d-ad5d-8947417d0c1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.526601 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-config" (OuterVolumeSpecName: "config") pod "568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" (UID: "568dfaa6-ec62-4a8d-ad5d-8947417d0c1c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.549803 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.559806 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2w85\" (UniqueName: \"kubernetes.io/projected/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-kube-api-access-l2w85\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.559843 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.559852 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.580350 4988 generic.go:334] "Generic (PLEG): container finished" podID="8b125fe4-eafa-4b2d-8d93-0c213022141b" containerID="7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491" exitCode=0 Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.580416 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" event={"ID":"8b125fe4-eafa-4b2d-8d93-0c213022141b","Type":"ContainerDied","Data":"7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491"} Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.580443 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" event={"ID":"8b125fe4-eafa-4b2d-8d93-0c213022141b","Type":"ContainerStarted","Data":"c7b6b17320d94bd046cb67299efbd8f1b942300bf5474e219438f9c17c57d2b3"} Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.582634 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rmdh8" event={"ID":"2afd4c0a-59f2-4313-a198-4e0e8255f163","Type":"ContainerStarted","Data":"ffefcfe13aa5e78958f29cba26a0887e5de7476e1ca25781589f55aaf4d027c2"} Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.582701 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rmdh8" event={"ID":"2afd4c0a-59f2-4313-a198-4e0e8255f163","Type":"ContainerStarted","Data":"02b550aa387b141edfbfa1c016e664f26bf90d025854ef06c103fa21d674e563"} Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.585820 4988 generic.go:334] "Generic (PLEG): container finished" podID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" containerID="cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134" exitCode=0 Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.585871 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" event={"ID":"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c","Type":"ContainerDied","Data":"cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134"} Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.585893 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" event={"ID":"568dfaa6-ec62-4a8d-ad5d-8947417d0c1c","Type":"ContainerDied","Data":"11181b027f7c3e2a64a14bdb3b0cf73415253801eed480e1d704569a7101f546"} Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.585910 4988 scope.go:117] "RemoveContainer" 
containerID="cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.586035 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-f2slh" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.589359 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" event={"ID":"04ee59ec-cbd5-44ce-b1f9-e342d60f2108","Type":"ContainerDied","Data":"012ac45f453e32c45432205c3482beae97c066f1268678f88f8f4552b2bcd2a0"} Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.589368 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-8nzmm" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.591077 4988 generic.go:334] "Generic (PLEG): container finished" podID="11409714-1e50-472e-bfdf-1d964d2b19b7" containerID="6593568c72220c295f4ad47c27573f6b5652fd63fbe417807eae865ffdb5712d" exitCode=0 Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.591115 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" event={"ID":"11409714-1e50-472e-bfdf-1d964d2b19b7","Type":"ContainerDied","Data":"6593568c72220c295f4ad47c27573f6b5652fd63fbe417807eae865ffdb5712d"} Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.591182 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" event={"ID":"11409714-1e50-472e-bfdf-1d964d2b19b7","Type":"ContainerStarted","Data":"de0590f6bb50ff51a6939c255f23ca112f263cf254ab4e161da34133656fb079"} Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.618501 4988 scope.go:117] "RemoveContainer" containerID="cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4" Nov 23 07:04:50 crc kubenswrapper[4988]: W1123 07:04:50.618501 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e680706_1677_4f92_9957_9dd477bbc7be.slice/crio-1ab026f031267b78b55dac80f172695997d786926b8e2ebb91d7af6dd8e3c851 WatchSource:0}: Error finding container 1ab026f031267b78b55dac80f172695997d786926b8e2ebb91d7af6dd8e3c851: Status 404 returned error can't find the container with id 1ab026f031267b78b55dac80f172695997d786926b8e2ebb91d7af6dd8e3c851 Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.638341 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-rmdh8" podStartSLOduration=2.638319451 podStartE2EDuration="2.638319451s" podCreationTimestamp="2025-11-23 07:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:04:50.636593779 +0000 UTC m=+1142.945106552" watchObservedRunningTime="2025-11-23 07:04:50.638319451 +0000 UTC m=+1142.946832214" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.769168 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8nzmm"] Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.774261 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8nzmm"] Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.778808 4988 scope.go:117] "RemoveContainer" containerID="cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134" Nov 23 07:04:50 crc kubenswrapper[4988]: E1123 07:04:50.780415 4988 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134\": container with ID starting with cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134 not found: ID does not exist" containerID="cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.780478 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134"} err="failed to get container status \"cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134\": rpc error: code = NotFound desc = could not find container \"cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134\": container with ID starting with cfd38fb73b89fdbb0f3e8fa57ff5b077c160235fdae74bd8ed9b29ef6e1a0134 not found: ID does not exist" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.780509 4988 scope.go:117] "RemoveContainer" containerID="cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.781721 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-f2slh"] Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.787454 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-f2slh"] Nov 23 07:04:50 crc kubenswrapper[4988]: E1123 07:04:50.793208 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4\": container with ID starting with cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4 not found: ID does not exist" containerID="cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.793238 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4"} err="failed to get container status \"cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4\": rpc error: code = NotFound desc = could not find container \"cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4\": container with ID starting with cbdffd726d480ff6c8eed7ae2155fc6c13c3716317ed4b020d2106c91893c9e4 not found: ID does not exist" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.793264 4988 scope.go:117] "RemoveContainer" containerID="0bbd062a10ed1b0b11a546f525e2006de6fc39fde9ae3ae6592d7247754490e6" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.990984 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 23 07:04:50 crc kubenswrapper[4988]: I1123 07:04:50.991033 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.075080 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.603035 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" event={"ID":"11409714-1e50-472e-bfdf-1d964d2b19b7","Type":"ContainerStarted","Data":"0ef992738682b3a959cee68a283583f685c113b9e79cf0ab5d2cb93f3d6e09a5"} Nov 23 07:04:51 crc 
kubenswrapper[4988]: I1123 07:04:51.604272 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.607263 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" event={"ID":"8b125fe4-eafa-4b2d-8d93-0c213022141b","Type":"ContainerStarted","Data":"9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7"} Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.607547 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.610441 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2e680706-1677-4f92-9957-9dd477bbc7be","Type":"ContainerStarted","Data":"1ab026f031267b78b55dac80f172695997d786926b8e2ebb91d7af6dd8e3c851"} Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.637133 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" podStartSLOduration=2.637117164 podStartE2EDuration="2.637117164s" podCreationTimestamp="2025-11-23 07:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:04:51.630378448 +0000 UTC m=+1143.938891211" watchObservedRunningTime="2025-11-23 07:04:51.637117164 +0000 UTC m=+1143.945629927" Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.672540 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.672597 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.672636 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.673270 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c58a12ac5dbe2a21fab60b7320f73b8a7940a58137541c4513bbaa568ab1edb1"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.673318 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://c58a12ac5dbe2a21fab60b7320f73b8a7940a58137541c4513bbaa568ab1edb1" gracePeriod=600 Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.717952 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 23 07:04:51 crc kubenswrapper[4988]: I1123 07:04:51.760261 4988 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" podStartSLOduration=3.760235316 podStartE2EDuration="3.760235316s" podCreationTimestamp="2025-11-23 07:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:04:51.655293172 +0000 UTC m=+1143.963805935" watchObservedRunningTime="2025-11-23 07:04:51.760235316 +0000 UTC m=+1144.068748089" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.251672 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1c7a-account-create-sbk64"] Nov 23 07:04:52 crc kubenswrapper[4988]: E1123 07:04:52.252355 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" containerName="init" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.252371 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" containerName="init" Nov 23 07:04:52 crc kubenswrapper[4988]: E1123 07:04:52.252384 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" containerName="dnsmasq-dns" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.252391 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" containerName="dnsmasq-dns" Nov 23 07:04:52 crc kubenswrapper[4988]: E1123 07:04:52.252402 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ee59ec-cbd5-44ce-b1f9-e342d60f2108" containerName="init" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.252408 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ee59ec-cbd5-44ce-b1f9-e342d60f2108" containerName="init" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.252637 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" containerName="dnsmasq-dns" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.252666 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ee59ec-cbd5-44ce-b1f9-e342d60f2108" containerName="init" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.253362 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.256561 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.260686 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1c7a-account-create-sbk64"] Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.295364 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jps4d"] Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.296576 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.326758 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jps4d"] Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.335748 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.335782 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.395050 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979eb123-9af3-468e-8725-0dc8b8b2cb43-operator-scripts\") pod \"keystone-db-create-jps4d\" (UID: \"979eb123-9af3-468e-8725-0dc8b8b2cb43\") " pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.395123 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa351d3a-ce77-4c06-8139-c4cdc669b330-operator-scripts\") pod \"keystone-1c7a-account-create-sbk64\" (UID: \"aa351d3a-ce77-4c06-8139-c4cdc669b330\") " pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.395646 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jrv\" (UniqueName: \"kubernetes.io/projected/aa351d3a-ce77-4c06-8139-c4cdc669b330-kube-api-access-c8jrv\") pod \"keystone-1c7a-account-create-sbk64\" (UID: \"aa351d3a-ce77-4c06-8139-c4cdc669b330\") " pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.395883 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcv97\" (UniqueName: \"kubernetes.io/projected/979eb123-9af3-468e-8725-0dc8b8b2cb43-kube-api-access-mcv97\") pod \"keystone-db-create-jps4d\" (UID: \"979eb123-9af3-468e-8725-0dc8b8b2cb43\") " pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.414499 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.498308 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8jrv\" (UniqueName: \"kubernetes.io/projected/aa351d3a-ce77-4c06-8139-c4cdc669b330-kube-api-access-c8jrv\") pod \"keystone-1c7a-account-create-sbk64\" (UID: \"aa351d3a-ce77-4c06-8139-c4cdc669b330\") " pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.498375 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcv97\" (UniqueName: \"kubernetes.io/projected/979eb123-9af3-468e-8725-0dc8b8b2cb43-kube-api-access-mcv97\") pod \"keystone-db-create-jps4d\" (UID: \"979eb123-9af3-468e-8725-0dc8b8b2cb43\") " pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.498401 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979eb123-9af3-468e-8725-0dc8b8b2cb43-operator-scripts\") pod \"keystone-db-create-jps4d\" (UID: 
\"979eb123-9af3-468e-8725-0dc8b8b2cb43\") " pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.498424 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa351d3a-ce77-4c06-8139-c4cdc669b330-operator-scripts\") pod \"keystone-1c7a-account-create-sbk64\" (UID: \"aa351d3a-ce77-4c06-8139-c4cdc669b330\") " pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.499379 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa351d3a-ce77-4c06-8139-c4cdc669b330-operator-scripts\") pod \"keystone-1c7a-account-create-sbk64\" (UID: \"aa351d3a-ce77-4c06-8139-c4cdc669b330\") " pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.500398 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979eb123-9af3-468e-8725-0dc8b8b2cb43-operator-scripts\") pod \"keystone-db-create-jps4d\" (UID: \"979eb123-9af3-468e-8725-0dc8b8b2cb43\") " pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.510811 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ee59ec-cbd5-44ce-b1f9-e342d60f2108" path="/var/lib/kubelet/pods/04ee59ec-cbd5-44ce-b1f9-e342d60f2108/volumes" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.511564 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568dfaa6-ec62-4a8d-ad5d-8947417d0c1c" path="/var/lib/kubelet/pods/568dfaa6-ec62-4a8d-ad5d-8947417d0c1c/volumes" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.512585 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nwt2p"] Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.513710 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.520362 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8jrv\" (UniqueName: \"kubernetes.io/projected/aa351d3a-ce77-4c06-8139-c4cdc669b330-kube-api-access-c8jrv\") pod \"keystone-1c7a-account-create-sbk64\" (UID: \"aa351d3a-ce77-4c06-8139-c4cdc669b330\") " pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.521710 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcv97\" (UniqueName: \"kubernetes.io/projected/979eb123-9af3-468e-8725-0dc8b8b2cb43-kube-api-access-mcv97\") pod \"keystone-db-create-jps4d\" (UID: \"979eb123-9af3-468e-8725-0dc8b8b2cb43\") " pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.525601 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nwt2p"] Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.570573 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.600164 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ec8075-1751-43c6-877c-45747d783f30-operator-scripts\") pod \"placement-db-create-nwt2p\" (UID: \"97ec8075-1751-43c6-877c-45747d783f30\") " pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.600824 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjtjm\" (UniqueName: \"kubernetes.io/projected/97ec8075-1751-43c6-877c-45747d783f30-kube-api-access-mjtjm\") pod \"placement-db-create-nwt2p\" (UID: \"97ec8075-1751-43c6-877c-45747d783f30\") " pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.615782 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.627757 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2c17-account-create-hnw2d"] Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.629178 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.635423 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2e680706-1677-4f92-9957-9dd477bbc7be","Type":"ContainerStarted","Data":"a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2"} Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.635641 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.649766 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="c58a12ac5dbe2a21fab60b7320f73b8a7940a58137541c4513bbaa568ab1edb1" exitCode=0 Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.650790 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"c58a12ac5dbe2a21fab60b7320f73b8a7940a58137541c4513bbaa568ab1edb1"} Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.650860 4988 scope.go:117] "RemoveContainer" containerID="34c85a0a4c08a90d6702cc03e588a3106590e0c8cfb5f38fb5c1e6482b5b2faf" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.652678 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2c17-account-create-hnw2d"] Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.701953 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-operator-scripts\") pod \"placement-2c17-account-create-hnw2d\" (UID: \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\") " pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.702024 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ec8075-1751-43c6-877c-45747d783f30-operator-scripts\") pod 
\"placement-db-create-nwt2p\" (UID: \"97ec8075-1751-43c6-877c-45747d783f30\") " pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.702268 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjtjm\" (UniqueName: \"kubernetes.io/projected/97ec8075-1751-43c6-877c-45747d783f30-kube-api-access-mjtjm\") pod \"placement-db-create-nwt2p\" (UID: \"97ec8075-1751-43c6-877c-45747d783f30\") " pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.702357 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf8mr\" (UniqueName: \"kubernetes.io/projected/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-kube-api-access-wf8mr\") pod \"placement-2c17-account-create-hnw2d\" (UID: \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\") " pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.705184 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ec8075-1751-43c6-877c-45747d783f30-operator-scripts\") pod \"placement-db-create-nwt2p\" (UID: \"97ec8075-1751-43c6-877c-45747d783f30\") " pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.745132 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjtjm\" (UniqueName: \"kubernetes.io/projected/97ec8075-1751-43c6-877c-45747d783f30-kube-api-access-mjtjm\") pod \"placement-db-create-nwt2p\" (UID: \"97ec8075-1751-43c6-877c-45747d783f30\") " pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.779666 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.806090 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-operator-scripts\") pod \"placement-2c17-account-create-hnw2d\" (UID: \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\") " pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.806273 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf8mr\" (UniqueName: \"kubernetes.io/projected/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-kube-api-access-wf8mr\") pod \"placement-2c17-account-create-hnw2d\" (UID: \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\") " pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.807696 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-operator-scripts\") pod \"placement-2c17-account-create-hnw2d\" (UID: \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\") " pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.827456 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf8mr\" (UniqueName: \"kubernetes.io/projected/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-kube-api-access-wf8mr\") pod \"placement-2c17-account-create-hnw2d\" (UID: \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\") " pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:52 crc 
kubenswrapper[4988]: I1123 07:04:52.931223 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jps4d"] Nov 23 07:04:52 crc kubenswrapper[4988]: I1123 07:04:52.977021 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1c7a-account-create-sbk64"] Nov 23 07:04:52 crc kubenswrapper[4988]: W1123 07:04:52.995125 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa351d3a_ce77_4c06_8139_c4cdc669b330.slice/crio-aa9c69bc06aca196c8f7fb7d00bea7066141ea3ff5726cfb5156cde689ccf45e WatchSource:0}: Error finding container aa9c69bc06aca196c8f7fb7d00bea7066141ea3ff5726cfb5156cde689ccf45e: Status 404 returned error can't find the container with id aa9c69bc06aca196c8f7fb7d00bea7066141ea3ff5726cfb5156cde689ccf45e Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.044224 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.056011 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.557300 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nwt2p"] Nov 23 07:04:53 crc kubenswrapper[4988]: W1123 07:04:53.562797 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97ec8075_1751_43c6_877c_45747d783f30.slice/crio-16b818dcb59ff44ddaa8c176801272ddd3250d2697833d175e9213f995a911ad WatchSource:0}: Error finding container 16b818dcb59ff44ddaa8c176801272ddd3250d2697833d175e9213f995a911ad: Status 404 returned error can't find the container with id 16b818dcb59ff44ddaa8c176801272ddd3250d2697833d175e9213f995a911ad Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.630827 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2c17-account-create-hnw2d"] Nov 23 07:04:53 crc kubenswrapper[4988]: W1123 07:04:53.646108 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ffda4e1_77d8_4d20_a473_ddb6030a3c40.slice/crio-4c65211cf7e33aea4a972f2b4dad7521020c4c46b0815db241a0db886870aa6d WatchSource:0}: Error finding container 4c65211cf7e33aea4a972f2b4dad7521020c4c46b0815db241a0db886870aa6d: Status 404 returned error can't find the container with id 4c65211cf7e33aea4a972f2b4dad7521020c4c46b0815db241a0db886870aa6d Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.664516 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"51af3eda6050dc8d062c5878f4b042de917b8197626fc2ae6d794dfa7ecf4da9"} Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.676892 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2e680706-1677-4f92-9957-9dd477bbc7be","Type":"ContainerStarted","Data":"5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749"} Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.677282 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.698071 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-2c17-account-create-hnw2d" event={"ID":"5ffda4e1-77d8-4d20-a473-ddb6030a3c40","Type":"ContainerStarted","Data":"4c65211cf7e33aea4a972f2b4dad7521020c4c46b0815db241a0db886870aa6d"} Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.712744 4988 generic.go:334] "Generic (PLEG): container finished" podID="979eb123-9af3-468e-8725-0dc8b8b2cb43" containerID="09b36958e7a38d8a2f724ab11c056ab39250d89750c4214925cacd2e136807f4" exitCode=0 Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.712873 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jps4d" event={"ID":"979eb123-9af3-468e-8725-0dc8b8b2cb43","Type":"ContainerDied","Data":"09b36958e7a38d8a2f724ab11c056ab39250d89750c4214925cacd2e136807f4"} Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.712902 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jps4d" event={"ID":"979eb123-9af3-468e-8725-0dc8b8b2cb43","Type":"ContainerStarted","Data":"50e90fa644a1f1c71e9906ccf38d08d333839705fc91b380753833a20cd441cf"} Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.716053 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nwt2p" event={"ID":"97ec8075-1751-43c6-877c-45747d783f30","Type":"ContainerStarted","Data":"16b818dcb59ff44ddaa8c176801272ddd3250d2697833d175e9213f995a911ad"} Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.721101 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.615030653 podStartE2EDuration="4.721082037s" podCreationTimestamp="2025-11-23 07:04:49 +0000 UTC" firstStartedPulling="2025-11-23 07:04:50.623487446 +0000 UTC m=+1142.932000209" lastFinishedPulling="2025-11-23 07:04:51.72953883 +0000 UTC m=+1144.038051593" observedRunningTime="2025-11-23 07:04:53.702578492 +0000 UTC m=+1146.011091265" watchObservedRunningTime="2025-11-23 07:04:53.721082037 +0000 UTC m=+1146.029594800" Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.724650 4988 generic.go:334] "Generic (PLEG): container finished" podID="aa351d3a-ce77-4c06-8139-c4cdc669b330" containerID="bec3e2825a178d1de93d5f5b8e2177dde9448344df03e6ee7fec9be9cc6caaf8" exitCode=0 Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.724813 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1c7a-account-create-sbk64" event={"ID":"aa351d3a-ce77-4c06-8139-c4cdc669b330","Type":"ContainerDied","Data":"bec3e2825a178d1de93d5f5b8e2177dde9448344df03e6ee7fec9be9cc6caaf8"} Nov 23 07:04:53 crc kubenswrapper[4988]: I1123 07:04:53.724861 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1c7a-account-create-sbk64" event={"ID":"aa351d3a-ce77-4c06-8139-c4cdc669b330","Type":"ContainerStarted","Data":"aa9c69bc06aca196c8f7fb7d00bea7066141ea3ff5726cfb5156cde689ccf45e"} Nov 23 07:04:54 crc kubenswrapper[4988]: I1123 07:04:54.737407 4988 generic.go:334] "Generic (PLEG): container finished" podID="5ffda4e1-77d8-4d20-a473-ddb6030a3c40" containerID="8cafe093f4c2b0f52906756ce4dcc579d9b0c31f9ae3dde6f9d8c8cd94230afc" exitCode=0 Nov 23 07:04:54 crc kubenswrapper[4988]: I1123 07:04:54.737496 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2c17-account-create-hnw2d" event={"ID":"5ffda4e1-77d8-4d20-a473-ddb6030a3c40","Type":"ContainerDied","Data":"8cafe093f4c2b0f52906756ce4dcc579d9b0c31f9ae3dde6f9d8c8cd94230afc"} Nov 23 07:04:54 crc kubenswrapper[4988]: I1123 07:04:54.740293 4988 generic.go:334] 
"Generic (PLEG): container finished" podID="97ec8075-1751-43c6-877c-45747d783f30" containerID="122cf31e8ed2fa64dd2c7f0e176a961c7c365aa2f1e3596640dcafa8dea570a5" exitCode=0 Nov 23 07:04:54 crc kubenswrapper[4988]: I1123 07:04:54.740425 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nwt2p" event={"ID":"97ec8075-1751-43c6-877c-45747d783f30","Type":"ContainerDied","Data":"122cf31e8ed2fa64dd2c7f0e176a961c7c365aa2f1e3596640dcafa8dea570a5"} Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.002556 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.051258 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-w9t7n"] Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.051771 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" podUID="8b125fe4-eafa-4b2d-8d93-0c213022141b" containerName="dnsmasq-dns" containerID="cri-o://9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7" gracePeriod=10 Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.070396 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.104137 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-qmc8p"] Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.107669 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.113595 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.150135 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa351d3a-ce77-4c06-8139-c4cdc669b330-operator-scripts\") pod \"aa351d3a-ce77-4c06-8139-c4cdc669b330\" (UID: \"aa351d3a-ce77-4c06-8139-c4cdc669b330\") " Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.150174 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8jrv\" (UniqueName: \"kubernetes.io/projected/aa351d3a-ce77-4c06-8139-c4cdc669b330-kube-api-access-c8jrv\") pod \"aa351d3a-ce77-4c06-8139-c4cdc669b330\" (UID: \"aa351d3a-ce77-4c06-8139-c4cdc669b330\") " Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.150572 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-config\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.150615 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.150648 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzvq\" (UniqueName: \"kubernetes.io/projected/f0b6f1aa-60d9-4998-b382-82840e0159f2-kube-api-access-gbzvq\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.150683 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.150708 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.151236 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa351d3a-ce77-4c06-8139-c4cdc669b330-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa351d3a-ce77-4c06-8139-c4cdc669b330" (UID: "aa351d3a-ce77-4c06-8139-c4cdc669b330"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.155253 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-qmc8p"] Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.170545 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa351d3a-ce77-4c06-8139-c4cdc669b330-kube-api-access-c8jrv" (OuterVolumeSpecName: "kube-api-access-c8jrv") pod "aa351d3a-ce77-4c06-8139-c4cdc669b330" (UID: "aa351d3a-ce77-4c06-8139-c4cdc669b330"). InnerVolumeSpecName "kube-api-access-c8jrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.254564 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.254628 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.254725 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-config\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.254768 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.254815 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbzvq\" (UniqueName: \"kubernetes.io/projected/f0b6f1aa-60d9-4998-b382-82840e0159f2-kube-api-access-gbzvq\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.254872 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa351d3a-ce77-4c06-8139-c4cdc669b330-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.254886 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8jrv\" (UniqueName: \"kubernetes.io/projected/aa351d3a-ce77-4c06-8139-c4cdc669b330-kube-api-access-c8jrv\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.256150 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 
07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.256917 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-config\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.260411 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.269434 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.279011 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbzvq\" (UniqueName: \"kubernetes.io/projected/f0b6f1aa-60d9-4998-b382-82840e0159f2-kube-api-access-gbzvq\") pod \"dnsmasq-dns-5c9fdb784c-qmc8p\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.330575 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.355931 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979eb123-9af3-468e-8725-0dc8b8b2cb43-operator-scripts\") pod \"979eb123-9af3-468e-8725-0dc8b8b2cb43\" (UID: \"979eb123-9af3-468e-8725-0dc8b8b2cb43\") " Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.356147 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcv97\" (UniqueName: \"kubernetes.io/projected/979eb123-9af3-468e-8725-0dc8b8b2cb43-kube-api-access-mcv97\") pod \"979eb123-9af3-468e-8725-0dc8b8b2cb43\" (UID: \"979eb123-9af3-468e-8725-0dc8b8b2cb43\") " Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.356544 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/979eb123-9af3-468e-8725-0dc8b8b2cb43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "979eb123-9af3-468e-8725-0dc8b8b2cb43" (UID: "979eb123-9af3-468e-8725-0dc8b8b2cb43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.357661 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979eb123-9af3-468e-8725-0dc8b8b2cb43-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.361871 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979eb123-9af3-468e-8725-0dc8b8b2cb43-kube-api-access-mcv97" (OuterVolumeSpecName: "kube-api-access-mcv97") pod "979eb123-9af3-468e-8725-0dc8b8b2cb43" (UID: "979eb123-9af3-468e-8725-0dc8b8b2cb43"). InnerVolumeSpecName "kube-api-access-mcv97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.459564 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcv97\" (UniqueName: \"kubernetes.io/projected/979eb123-9af3-468e-8725-0dc8b8b2cb43-kube-api-access-mcv97\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.477432 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.546730 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.561070 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-dns-svc\") pod \"8b125fe4-eafa-4b2d-8d93-0c213022141b\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.561144 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgbpr\" (UniqueName: \"kubernetes.io/projected/8b125fe4-eafa-4b2d-8d93-0c213022141b-kube-api-access-kgbpr\") pod \"8b125fe4-eafa-4b2d-8d93-0c213022141b\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.561253 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-config\") pod \"8b125fe4-eafa-4b2d-8d93-0c213022141b\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.561295 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-ovsdbserver-nb\") pod \"8b125fe4-eafa-4b2d-8d93-0c213022141b\" (UID: \"8b125fe4-eafa-4b2d-8d93-0c213022141b\") " Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.567717 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b125fe4-eafa-4b2d-8d93-0c213022141b-kube-api-access-kgbpr" (OuterVolumeSpecName: "kube-api-access-kgbpr") pod "8b125fe4-eafa-4b2d-8d93-0c213022141b" (UID: "8b125fe4-eafa-4b2d-8d93-0c213022141b"). InnerVolumeSpecName "kube-api-access-kgbpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.609555 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b125fe4-eafa-4b2d-8d93-0c213022141b" (UID: "8b125fe4-eafa-4b2d-8d93-0c213022141b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.626793 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b125fe4-eafa-4b2d-8d93-0c213022141b" (UID: "8b125fe4-eafa-4b2d-8d93-0c213022141b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.655932 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-config" (OuterVolumeSpecName: "config") pod "8b125fe4-eafa-4b2d-8d93-0c213022141b" (UID: "8b125fe4-eafa-4b2d-8d93-0c213022141b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.663486 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.663517 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.663532 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b125fe4-eafa-4b2d-8d93-0c213022141b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.663543 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgbpr\" (UniqueName: \"kubernetes.io/projected/8b125fe4-eafa-4b2d-8d93-0c213022141b-kube-api-access-kgbpr\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.751084 4988 generic.go:334] "Generic (PLEG): container finished" podID="8b125fe4-eafa-4b2d-8d93-0c213022141b" containerID="9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7" exitCode=0 Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.751131 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.751152 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" event={"ID":"8b125fe4-eafa-4b2d-8d93-0c213022141b","Type":"ContainerDied","Data":"9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7"} Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.751183 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-w9t7n" event={"ID":"8b125fe4-eafa-4b2d-8d93-0c213022141b","Type":"ContainerDied","Data":"c7b6b17320d94bd046cb67299efbd8f1b942300bf5474e219438f9c17c57d2b3"} Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.751211 4988 scope.go:117] "RemoveContainer" containerID="9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.754442 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jps4d" event={"ID":"979eb123-9af3-468e-8725-0dc8b8b2cb43","Type":"ContainerDied","Data":"50e90fa644a1f1c71e9906ccf38d08d333839705fc91b380753833a20cd441cf"} Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.754466 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e90fa644a1f1c71e9906ccf38d08d333839705fc91b380753833a20cd441cf" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.754509 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jps4d" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.759890 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1c7a-account-create-sbk64" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.768322 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1c7a-account-create-sbk64" event={"ID":"aa351d3a-ce77-4c06-8139-c4cdc669b330","Type":"ContainerDied","Data":"aa9c69bc06aca196c8f7fb7d00bea7066141ea3ff5726cfb5156cde689ccf45e"} Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.768395 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa9c69bc06aca196c8f7fb7d00bea7066141ea3ff5726cfb5156cde689ccf45e" Nov 23 07:04:55 crc kubenswrapper[4988]: I1123 07:04:55.777503 4988 scope.go:117] "RemoveContainer" containerID="7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.014463 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-qmc8p"] Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.181413 4988 scope.go:117] "RemoveContainer" containerID="9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7" Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.220741 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7\": container with ID starting with 9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7 not found: ID does not exist" containerID="9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.220788 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7"} err="failed to get container status \"9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7\": rpc error: code = NotFound desc = could not find container \"9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7\": container with ID starting with 9711e26a125bd1f6fa1f42116e3afb92b292eb91a047a9181685bed35ed595b7 not found: ID does not exist" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.220815 4988 scope.go:117] "RemoveContainer" containerID="7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491" Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.223608 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491\": container with ID starting with 7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491 not found: ID does not exist" containerID="7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.223701 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491"} err="failed to get container status \"7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491\": rpc error: code = NotFound desc = could not find container \"7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491\": container with ID starting with 
7f442a13c2ae545f5204510fa03070150582b6d4541805bf22646c6c7ace5491 not found: ID does not exist" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.233298 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.233656 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979eb123-9af3-468e-8725-0dc8b8b2cb43" containerName="mariadb-database-create" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.233668 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="979eb123-9af3-468e-8725-0dc8b8b2cb43" containerName="mariadb-database-create" Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.233690 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b125fe4-eafa-4b2d-8d93-0c213022141b" containerName="init" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.233695 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b125fe4-eafa-4b2d-8d93-0c213022141b" containerName="init" Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.233706 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa351d3a-ce77-4c06-8139-c4cdc669b330" containerName="mariadb-account-create" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.233712 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa351d3a-ce77-4c06-8139-c4cdc669b330" containerName="mariadb-account-create" Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.233738 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b125fe4-eafa-4b2d-8d93-0c213022141b" containerName="dnsmasq-dns" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.233744 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b125fe4-eafa-4b2d-8d93-0c213022141b" containerName="dnsmasq-dns" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.233902 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b125fe4-eafa-4b2d-8d93-0c213022141b" containerName="dnsmasq-dns" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.233915 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa351d3a-ce77-4c06-8139-c4cdc669b330" containerName="mariadb-account-create" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.233924 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="979eb123-9af3-468e-8725-0dc8b8b2cb43" containerName="mariadb-database-create" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.246951 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.250039 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.251722 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.251739 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-srntd" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.251903 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.251926 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.284780 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-cache\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.284831 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.284883 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-lock\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.284902 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbtn\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-kube-api-access-vfbtn\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.284941 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.315457 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-w9t7n"] Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.318946 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.328233 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-w9t7n"] Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.377703 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.388223 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ec8075-1751-43c6-877c-45747d783f30-operator-scripts\") pod \"97ec8075-1751-43c6-877c-45747d783f30\" (UID: \"97ec8075-1751-43c6-877c-45747d783f30\") " Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.388280 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf8mr\" (UniqueName: \"kubernetes.io/projected/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-kube-api-access-wf8mr\") pod \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\" (UID: \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\") " Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.388426 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjtjm\" (UniqueName: \"kubernetes.io/projected/97ec8075-1751-43c6-877c-45747d783f30-kube-api-access-mjtjm\") pod \"97ec8075-1751-43c6-877c-45747d783f30\" (UID: \"97ec8075-1751-43c6-877c-45747d783f30\") " Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.388494 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-operator-scripts\") pod \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\" (UID: \"5ffda4e1-77d8-4d20-a473-ddb6030a3c40\") " Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.389389 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ec8075-1751-43c6-877c-45747d783f30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97ec8075-1751-43c6-877c-45747d783f30" (UID: "97ec8075-1751-43c6-877c-45747d783f30"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.389550 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.389669 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-lock\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.389707 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfbtn\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-kube-api-access-vfbtn\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.389779 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.389847 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-cache\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.389968 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97ec8075-1751-43c6-877c-45747d783f30-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.390105 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ffda4e1-77d8-4d20-a473-ddb6030a3c40" (UID: "5ffda4e1-77d8-4d20-a473-ddb6030a3c40"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.390564 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-lock\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.390613 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-cache\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.390732 4988 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.391042 4988 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.391540 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift podName:fa95668c-09b0-4440-ab49-f1a1b29ebf64 nodeName:}" failed. No retries permitted until 2025-11-23 07:04:56.89152235 +0000 UTC m=+1149.200035113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift") pod "swift-storage-0" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64") : configmap "swift-ring-files" not found Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.391637 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.391766 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-kube-api-access-wf8mr" (OuterVolumeSpecName: "kube-api-access-wf8mr") pod "5ffda4e1-77d8-4d20-a473-ddb6030a3c40" (UID: "5ffda4e1-77d8-4d20-a473-ddb6030a3c40"). InnerVolumeSpecName "kube-api-access-wf8mr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.397672 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b125fe4_eafa_4b2d_8d93_0c213022141b.slice/crio-c7b6b17320d94bd046cb67299efbd8f1b942300bf5474e219438f9c17c57d2b3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa351d3a_ce77_4c06_8139_c4cdc669b330.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b125fe4_eafa_4b2d_8d93_0c213022141b.slice\": RecentStats: unable to find data in memory cache]" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.399604 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ec8075-1751-43c6-877c-45747d783f30-kube-api-access-mjtjm" (OuterVolumeSpecName: "kube-api-access-mjtjm") pod "97ec8075-1751-43c6-877c-45747d783f30" (UID: "97ec8075-1751-43c6-877c-45747d783f30"). InnerVolumeSpecName "kube-api-access-mjtjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.415382 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfbtn\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-kube-api-access-vfbtn\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.430265 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.491601 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjtjm\" (UniqueName: \"kubernetes.io/projected/97ec8075-1751-43c6-877c-45747d783f30-kube-api-access-mjtjm\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.491643 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.491658 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf8mr\" (UniqueName: \"kubernetes.io/projected/5ffda4e1-77d8-4d20-a473-ddb6030a3c40-kube-api-access-wf8mr\") on node \"crc\" DevicePath \"\"" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.507012 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b125fe4-eafa-4b2d-8d93-0c213022141b" path="/var/lib/kubelet/pods/8b125fe4-eafa-4b2d-8d93-0c213022141b/volumes" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.770131 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2c17-account-create-hnw2d" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.770127 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2c17-account-create-hnw2d" event={"ID":"5ffda4e1-77d8-4d20-a473-ddb6030a3c40","Type":"ContainerDied","Data":"4c65211cf7e33aea4a972f2b4dad7521020c4c46b0815db241a0db886870aa6d"} Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.771321 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c65211cf7e33aea4a972f2b4dad7521020c4c46b0815db241a0db886870aa6d" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.771878 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nwt2p" event={"ID":"97ec8075-1751-43c6-877c-45747d783f30","Type":"ContainerDied","Data":"16b818dcb59ff44ddaa8c176801272ddd3250d2697833d175e9213f995a911ad"} Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.772007 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16b818dcb59ff44ddaa8c176801272ddd3250d2697833d175e9213f995a911ad" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.771919 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nwt2p" Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.773249 4988 generic.go:334] "Generic (PLEG): container finished" podID="f0b6f1aa-60d9-4998-b382-82840e0159f2" containerID="414185fd06129889e13c908eb78f1f2f2a9e102d8c0e598242683ad092ddda7a" exitCode=0 Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.773277 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" event={"ID":"f0b6f1aa-60d9-4998-b382-82840e0159f2","Type":"ContainerDied","Data":"414185fd06129889e13c908eb78f1f2f2a9e102d8c0e598242683ad092ddda7a"} Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.773294 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" event={"ID":"f0b6f1aa-60d9-4998-b382-82840e0159f2","Type":"ContainerStarted","Data":"5ccee52bdf5f61b6eb72112452e7e18a7e0f7bfb115a8bf5b36787cd2d48577d"} Nov 23 07:04:56 crc kubenswrapper[4988]: I1123 07:04:56.899974 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.900330 4988 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.900357 4988 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:04:56 crc kubenswrapper[4988]: E1123 07:04:56.900413 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift podName:fa95668c-09b0-4440-ab49-f1a1b29ebf64 nodeName:}" failed. No retries permitted until 2025-11-23 07:04:57.90039233 +0000 UTC m=+1150.208905083 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift") pod "swift-storage-0" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64") : configmap "swift-ring-files" not found Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.782813 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" event={"ID":"f0b6f1aa-60d9-4998-b382-82840e0159f2","Type":"ContainerStarted","Data":"d1a082c7b37805cdb7bdeeab367da02afd2f77a70f5b0a648cfea94f8b3d34ef"} Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.783112 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.804868 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" podStartSLOduration=2.80484691 podStartE2EDuration="2.80484691s" podCreationTimestamp="2025-11-23 07:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:04:57.799649812 +0000 UTC m=+1150.108162575" watchObservedRunningTime="2025-11-23 07:04:57.80484691 +0000 UTC m=+1150.113359683" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.820173 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rpqfv"] Nov 23 07:04:57 crc kubenswrapper[4988]: E1123 07:04:57.820580 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ffda4e1-77d8-4d20-a473-ddb6030a3c40" containerName="mariadb-account-create" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.820607 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ffda4e1-77d8-4d20-a473-ddb6030a3c40" containerName="mariadb-account-create" Nov 23 07:04:57 crc kubenswrapper[4988]: E1123 07:04:57.820638 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ec8075-1751-43c6-877c-45747d783f30" containerName="mariadb-database-create" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.820648 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ec8075-1751-43c6-877c-45747d783f30" containerName="mariadb-database-create" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.820860 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ffda4e1-77d8-4d20-a473-ddb6030a3c40" containerName="mariadb-account-create" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.820888 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ec8075-1751-43c6-877c-45747d783f30" containerName="mariadb-database-create" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.821556 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rpqfv" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.843752 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rpqfv"] Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.922281 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-23d0-account-create-dwcp7"] Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.923490 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.927009 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.929864 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9qn\" (UniqueName: \"kubernetes.io/projected/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-kube-api-access-mr9qn\") pod \"glance-db-create-rpqfv\" (UID: \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\") " pod="openstack/glance-db-create-rpqfv" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.929907 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-operator-scripts\") pod \"glance-db-create-rpqfv\" (UID: \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\") " pod="openstack/glance-db-create-rpqfv" Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.930647 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:57 crc kubenswrapper[4988]: E1123 07:04:57.930802 4988 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:04:57 crc kubenswrapper[4988]: E1123 07:04:57.930821 4988 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:04:57 crc kubenswrapper[4988]: E1123 07:04:57.930867 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift podName:fa95668c-09b0-4440-ab49-f1a1b29ebf64 nodeName:}" failed. No retries permitted until 2025-11-23 07:04:59.930849932 +0000 UTC m=+1152.239362695 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift") pod "swift-storage-0" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64") : configmap "swift-ring-files" not found Nov 23 07:04:57 crc kubenswrapper[4988]: I1123 07:04:57.932505 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-23d0-account-create-dwcp7"] Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.033037 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51f33e05-cf5d-4946-b57b-1cec9f01352b-operator-scripts\") pod \"glance-23d0-account-create-dwcp7\" (UID: \"51f33e05-cf5d-4946-b57b-1cec9f01352b\") " pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.033110 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pkxb\" (UniqueName: \"kubernetes.io/projected/51f33e05-cf5d-4946-b57b-1cec9f01352b-kube-api-access-8pkxb\") pod \"glance-23d0-account-create-dwcp7\" (UID: \"51f33e05-cf5d-4946-b57b-1cec9f01352b\") " pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.033412 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9qn\" (UniqueName: \"kubernetes.io/projected/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-kube-api-access-mr9qn\") pod \"glance-db-create-rpqfv\" (UID: \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\") " pod="openstack/glance-db-create-rpqfv" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.033446 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-operator-scripts\") pod \"glance-db-create-rpqfv\" (UID: \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\") " pod="openstack/glance-db-create-rpqfv" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.034091 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-operator-scripts\") pod \"glance-db-create-rpqfv\" (UID: \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\") " pod="openstack/glance-db-create-rpqfv" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.059261 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9qn\" (UniqueName: \"kubernetes.io/projected/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-kube-api-access-mr9qn\") pod \"glance-db-create-rpqfv\" (UID: \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\") " pod="openstack/glance-db-create-rpqfv" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.134539 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51f33e05-cf5d-4946-b57b-1cec9f01352b-operator-scripts\") pod \"glance-23d0-account-create-dwcp7\" (UID: \"51f33e05-cf5d-4946-b57b-1cec9f01352b\") " pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.134591 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pkxb\" (UniqueName: \"kubernetes.io/projected/51f33e05-cf5d-4946-b57b-1cec9f01352b-kube-api-access-8pkxb\") pod \"glance-23d0-account-create-dwcp7\" (UID: 
\"51f33e05-cf5d-4946-b57b-1cec9f01352b\") " pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.135328 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51f33e05-cf5d-4946-b57b-1cec9f01352b-operator-scripts\") pod \"glance-23d0-account-create-dwcp7\" (UID: \"51f33e05-cf5d-4946-b57b-1cec9f01352b\") " pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.138668 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rpqfv" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.163868 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pkxb\" (UniqueName: \"kubernetes.io/projected/51f33e05-cf5d-4946-b57b-1cec9f01352b-kube-api-access-8pkxb\") pod \"glance-23d0-account-create-dwcp7\" (UID: \"51f33e05-cf5d-4946-b57b-1cec9f01352b\") " pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.245286 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.686444 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rpqfv"] Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.789782 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-23d0-account-create-dwcp7"] Nov 23 07:04:58 crc kubenswrapper[4988]: I1123 07:04:58.799891 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rpqfv" event={"ID":"1f7b2124-7faf-4de1-ac89-15eeccc1abe7","Type":"ContainerStarted","Data":"88cc70124e0cd58c345d873dec3e729d74e2f926e42da4faccbc88ec777e4dce"} Nov 23 07:04:59 crc kubenswrapper[4988]: I1123 07:04:59.485356 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:04:59 crc kubenswrapper[4988]: I1123 07:04:59.807161 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-23d0-account-create-dwcp7" event={"ID":"51f33e05-cf5d-4946-b57b-1cec9f01352b","Type":"ContainerStarted","Data":"2e064b9577e5e26862d368064d133013d05918f54b13c6ae9fbd44fb108d9183"} Nov 23 07:04:59 crc kubenswrapper[4988]: I1123 07:04:59.960789 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:04:59 crc kubenswrapper[4988]: E1123 07:04:59.960955 4988 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:04:59 crc kubenswrapper[4988]: E1123 07:04:59.961202 4988 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:04:59 crc kubenswrapper[4988]: E1123 07:04:59.961246 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift podName:fa95668c-09b0-4440-ab49-f1a1b29ebf64 nodeName:}" failed. No retries permitted until 2025-11-23 07:05:03.961231406 +0000 UTC m=+1156.269744169 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift") pod "swift-storage-0" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64") : configmap "swift-ring-files" not found Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.138171 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-c4zpx"] Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.139266 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.141303 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.141311 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.141680 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.152185 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-c4zpx"] Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.164380 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-swiftconf\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.164623 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-dispersionconf\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.164729 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvp7j\" (UniqueName: \"kubernetes.io/projected/d9f85c9f-2478-4293-85cb-17eccd6f262c-kube-api-access-nvp7j\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.164811 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-ring-data-devices\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.164890 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d9f85c9f-2478-4293-85cb-17eccd6f262c-etc-swift\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.164954 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-scripts\") pod 
\"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.165035 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-combined-ca-bundle\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.268087 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-dispersionconf\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.268152 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvp7j\" (UniqueName: \"kubernetes.io/projected/d9f85c9f-2478-4293-85cb-17eccd6f262c-kube-api-access-nvp7j\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.268183 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-ring-data-devices\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.268222 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d9f85c9f-2478-4293-85cb-17eccd6f262c-etc-swift\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.268246 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-scripts\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.268288 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-combined-ca-bundle\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.268519 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-swiftconf\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.269549 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d9f85c9f-2478-4293-85cb-17eccd6f262c-etc-swift\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " 
pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.269960 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-ring-data-devices\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.269992 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-scripts\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.274838 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-dispersionconf\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.274896 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-swiftconf\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.275299 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-combined-ca-bundle\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.290454 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvp7j\" (UniqueName: \"kubernetes.io/projected/d9f85c9f-2478-4293-85cb-17eccd6f262c-kube-api-access-nvp7j\") pod \"swift-ring-rebalance-c4zpx\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.459890 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:00 crc kubenswrapper[4988]: I1123 07:05:00.876145 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-c4zpx"] Nov 23 07:05:00 crc kubenswrapper[4988]: W1123 07:05:00.881845 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9f85c9f_2478_4293_85cb_17eccd6f262c.slice/crio-904bbafbe8a3d192ff2d78dbd5fab84ebac6af63a548c2d54a07c86851e9b385 WatchSource:0}: Error finding container 904bbafbe8a3d192ff2d78dbd5fab84ebac6af63a548c2d54a07c86851e9b385: Status 404 returned error can't find the container with id 904bbafbe8a3d192ff2d78dbd5fab84ebac6af63a548c2d54a07c86851e9b385 Nov 23 07:05:01 crc kubenswrapper[4988]: I1123 07:05:01.828060 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c4zpx" event={"ID":"d9f85c9f-2478-4293-85cb-17eccd6f262c","Type":"ContainerStarted","Data":"904bbafbe8a3d192ff2d78dbd5fab84ebac6af63a548c2d54a07c86851e9b385"} Nov 23 07:05:01 crc kubenswrapper[4988]: I1123 07:05:01.829974 4988 generic.go:334] "Generic (PLEG): container finished" podID="51f33e05-cf5d-4946-b57b-1cec9f01352b" containerID="236952f66862d458af7dafe2a87ed0e4db0e53afe9f61e5deedd1956006ce87c" exitCode=0 Nov 23 07:05:01 crc kubenswrapper[4988]: I1123 07:05:01.830022 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-23d0-account-create-dwcp7" event={"ID":"51f33e05-cf5d-4946-b57b-1cec9f01352b","Type":"ContainerDied","Data":"236952f66862d458af7dafe2a87ed0e4db0e53afe9f61e5deedd1956006ce87c"} Nov 23 07:05:01 crc kubenswrapper[4988]: I1123 07:05:01.832722 4988 generic.go:334] "Generic (PLEG): container finished" podID="1f7b2124-7faf-4de1-ac89-15eeccc1abe7" containerID="bc3c8ae80a13eff76ff5d7ab18132107c5fb43da4eae70ae1c79f0b4e86a1fea" exitCode=0 Nov 23 07:05:01 crc kubenswrapper[4988]: I1123 07:05:01.832749 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rpqfv" event={"ID":"1f7b2124-7faf-4de1-ac89-15eeccc1abe7","Type":"ContainerDied","Data":"bc3c8ae80a13eff76ff5d7ab18132107c5fb43da4eae70ae1c79f0b4e86a1fea"} Nov 23 07:05:04 crc kubenswrapper[4988]: E1123 07:05:04.042218 4988 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:05:04 crc kubenswrapper[4988]: E1123 07:05:04.043017 4988 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:05:04 crc kubenswrapper[4988]: E1123 07:05:04.043129 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift podName:fa95668c-09b0-4440-ab49-f1a1b29ebf64 nodeName:}" failed. No retries permitted until 2025-11-23 07:05:12.043097103 +0000 UTC m=+1164.351609886 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift") pod "swift-storage-0" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64") : configmap "swift-ring-files" not found Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.042002 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.068362 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.076520 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rpqfv" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.144956 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr9qn\" (UniqueName: \"kubernetes.io/projected/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-kube-api-access-mr9qn\") pod \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\" (UID: \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\") " Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.145074 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51f33e05-cf5d-4946-b57b-1cec9f01352b-operator-scripts\") pod \"51f33e05-cf5d-4946-b57b-1cec9f01352b\" (UID: \"51f33e05-cf5d-4946-b57b-1cec9f01352b\") " Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.145151 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pkxb\" (UniqueName: \"kubernetes.io/projected/51f33e05-cf5d-4946-b57b-1cec9f01352b-kube-api-access-8pkxb\") pod \"51f33e05-cf5d-4946-b57b-1cec9f01352b\" (UID: \"51f33e05-cf5d-4946-b57b-1cec9f01352b\") " Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.145260 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-operator-scripts\") pod \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\" (UID: \"1f7b2124-7faf-4de1-ac89-15eeccc1abe7\") " Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.145993 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f7b2124-7faf-4de1-ac89-15eeccc1abe7" (UID: "1f7b2124-7faf-4de1-ac89-15eeccc1abe7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.145999 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51f33e05-cf5d-4946-b57b-1cec9f01352b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51f33e05-cf5d-4946-b57b-1cec9f01352b" (UID: "51f33e05-cf5d-4946-b57b-1cec9f01352b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.152546 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f33e05-cf5d-4946-b57b-1cec9f01352b-kube-api-access-8pkxb" (OuterVolumeSpecName: "kube-api-access-8pkxb") pod "51f33e05-cf5d-4946-b57b-1cec9f01352b" (UID: "51f33e05-cf5d-4946-b57b-1cec9f01352b"). InnerVolumeSpecName "kube-api-access-8pkxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.152625 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-kube-api-access-mr9qn" (OuterVolumeSpecName: "kube-api-access-mr9qn") pod "1f7b2124-7faf-4de1-ac89-15eeccc1abe7" (UID: "1f7b2124-7faf-4de1-ac89-15eeccc1abe7"). InnerVolumeSpecName "kube-api-access-mr9qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.247161 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51f33e05-cf5d-4946-b57b-1cec9f01352b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.247248 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pkxb\" (UniqueName: \"kubernetes.io/projected/51f33e05-cf5d-4946-b57b-1cec9f01352b-kube-api-access-8pkxb\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.247276 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.247291 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr9qn\" (UniqueName: \"kubernetes.io/projected/1f7b2124-7faf-4de1-ac89-15eeccc1abe7-kube-api-access-mr9qn\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.865330 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-23d0-account-create-dwcp7" event={"ID":"51f33e05-cf5d-4946-b57b-1cec9f01352b","Type":"ContainerDied","Data":"2e064b9577e5e26862d368064d133013d05918f54b13c6ae9fbd44fb108d9183"} Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.865403 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e064b9577e5e26862d368064d133013d05918f54b13c6ae9fbd44fb108d9183" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.865357 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-23d0-account-create-dwcp7" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.868947 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rpqfv" event={"ID":"1f7b2124-7faf-4de1-ac89-15eeccc1abe7","Type":"ContainerDied","Data":"88cc70124e0cd58c345d873dec3e729d74e2f926e42da4faccbc88ec777e4dce"} Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.869003 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88cc70124e0cd58c345d873dec3e729d74e2f926e42da4faccbc88ec777e4dce" Nov 23 07:05:04 crc kubenswrapper[4988]: I1123 07:05:04.869230 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-rpqfv" Nov 23 07:05:05 crc kubenswrapper[4988]: I1123 07:05:05.224490 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 23 07:05:05 crc kubenswrapper[4988]: I1123 07:05:05.479447 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:05:05 crc kubenswrapper[4988]: I1123 07:05:05.545393 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-d54jk"] Nov 23 07:05:05 crc kubenswrapper[4988]: I1123 07:05:05.545660 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" podUID="11409714-1e50-472e-bfdf-1d964d2b19b7" containerName="dnsmasq-dns" containerID="cri-o://0ef992738682b3a959cee68a283583f685c113b9e79cf0ab5d2cb93f3d6e09a5" gracePeriod=10 Nov 23 07:05:05 crc kubenswrapper[4988]: I1123 07:05:05.878963 4988 generic.go:334] "Generic (PLEG): container finished" podID="11409714-1e50-472e-bfdf-1d964d2b19b7" containerID="0ef992738682b3a959cee68a283583f685c113b9e79cf0ab5d2cb93f3d6e09a5" exitCode=0 Nov 23 07:05:05 crc kubenswrapper[4988]: I1123 07:05:05.879006 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" event={"ID":"11409714-1e50-472e-bfdf-1d964d2b19b7","Type":"ContainerDied","Data":"0ef992738682b3a959cee68a283583f685c113b9e79cf0ab5d2cb93f3d6e09a5"} Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.083095 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bprtz"] Nov 23 07:05:08 crc kubenswrapper[4988]: E1123 07:05:08.083738 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7b2124-7faf-4de1-ac89-15eeccc1abe7" containerName="mariadb-database-create" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.083752 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7b2124-7faf-4de1-ac89-15eeccc1abe7" containerName="mariadb-database-create" Nov 23 07:05:08 crc kubenswrapper[4988]: E1123 07:05:08.083771 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f33e05-cf5d-4946-b57b-1cec9f01352b" containerName="mariadb-account-create" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.083777 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f33e05-cf5d-4946-b57b-1cec9f01352b" containerName="mariadb-account-create" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.083930 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7b2124-7faf-4de1-ac89-15eeccc1abe7" containerName="mariadb-database-create" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.083953 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f33e05-cf5d-4946-b57b-1cec9f01352b" containerName="mariadb-account-create" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.084606 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.087006 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p2qgk" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.087791 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.103864 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bprtz"] Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.230223 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-config-data\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.230308 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-db-sync-config-data\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.230425 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-combined-ca-bundle\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.230500 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbq2x\" (UniqueName: \"kubernetes.io/projected/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-kube-api-access-fbq2x\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.332033 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-config-data\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.332473 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-db-sync-config-data\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.332671 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-combined-ca-bundle\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.332804 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbq2x\" (UniqueName: \"kubernetes.io/projected/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-kube-api-access-fbq2x\") pod 
\"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.338114 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-combined-ca-bundle\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.338609 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-db-sync-config-data\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.338650 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-config-data\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.357496 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbq2x\" (UniqueName: \"kubernetes.io/projected/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-kube-api-access-fbq2x\") pod \"glance-db-sync-bprtz\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.405823 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.656504 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.843457 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-nb\") pod \"11409714-1e50-472e-bfdf-1d964d2b19b7\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.843737 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4s8l\" (UniqueName: \"kubernetes.io/projected/11409714-1e50-472e-bfdf-1d964d2b19b7-kube-api-access-w4s8l\") pod \"11409714-1e50-472e-bfdf-1d964d2b19b7\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.843858 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-config\") pod \"11409714-1e50-472e-bfdf-1d964d2b19b7\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.844046 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-sb\") pod \"11409714-1e50-472e-bfdf-1d964d2b19b7\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.854416 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-dns-svc\") pod \"11409714-1e50-472e-bfdf-1d964d2b19b7\" (UID: \"11409714-1e50-472e-bfdf-1d964d2b19b7\") " Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.858619 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11409714-1e50-472e-bfdf-1d964d2b19b7-kube-api-access-w4s8l" (OuterVolumeSpecName: "kube-api-access-w4s8l") pod "11409714-1e50-472e-bfdf-1d964d2b19b7" (UID: "11409714-1e50-472e-bfdf-1d964d2b19b7"). InnerVolumeSpecName "kube-api-access-w4s8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.908759 4988 generic.go:334] "Generic (PLEG): container finished" podID="692be1c8-4d8f-4676-89df-19f82b43f043" containerID="ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a" exitCode=0 Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.908833 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"692be1c8-4d8f-4676-89df-19f82b43f043","Type":"ContainerDied","Data":"ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a"} Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.915426 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" event={"ID":"11409714-1e50-472e-bfdf-1d964d2b19b7","Type":"ContainerDied","Data":"de0590f6bb50ff51a6939c255f23ca112f263cf254ab4e161da34133656fb079"} Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.915479 4988 scope.go:117] "RemoveContainer" containerID="0ef992738682b3a959cee68a283583f685c113b9e79cf0ab5d2cb93f3d6e09a5" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.915650 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-d54jk" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.921083 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11409714-1e50-472e-bfdf-1d964d2b19b7" (UID: "11409714-1e50-472e-bfdf-1d964d2b19b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.921651 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-config" (OuterVolumeSpecName: "config") pod "11409714-1e50-472e-bfdf-1d964d2b19b7" (UID: "11409714-1e50-472e-bfdf-1d964d2b19b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.935317 4988 scope.go:117] "RemoveContainer" containerID="6593568c72220c295f4ad47c27573f6b5652fd63fbe417807eae865ffdb5712d" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.938528 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11409714-1e50-472e-bfdf-1d964d2b19b7" (UID: "11409714-1e50-472e-bfdf-1d964d2b19b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.948534 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11409714-1e50-472e-bfdf-1d964d2b19b7" (UID: "11409714-1e50-472e-bfdf-1d964d2b19b7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.956405 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.956440 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4s8l\" (UniqueName: \"kubernetes.io/projected/11409714-1e50-472e-bfdf-1d964d2b19b7-kube-api-access-w4s8l\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.956454 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.956466 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:08 crc kubenswrapper[4988]: I1123 07:05:08.956477 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11409714-1e50-472e-bfdf-1d964d2b19b7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:09 crc kubenswrapper[4988]: I1123 07:05:09.006553 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bprtz"] Nov 23 07:05:09 crc kubenswrapper[4988]: I1123 07:05:09.249553 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-d54jk"] Nov 23 07:05:09 crc kubenswrapper[4988]: I1123 07:05:09.256509 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-d54jk"] Nov 23 07:05:09 crc kubenswrapper[4988]: I1123 07:05:09.930547 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"692be1c8-4d8f-4676-89df-19f82b43f043","Type":"ContainerStarted","Data":"70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e"} Nov 23 07:05:09 crc kubenswrapper[4988]: I1123 07:05:09.935180 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:05:09 crc kubenswrapper[4988]: I1123 07:05:09.936479 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bprtz" event={"ID":"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61","Type":"ContainerStarted","Data":"6fd6dd4134909aaa5dad51578fc3ff634e8b0440231933a77e72e29fed453914"} Nov 23 07:05:09 crc kubenswrapper[4988]: I1123 07:05:09.941914 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c4zpx" event={"ID":"d9f85c9f-2478-4293-85cb-17eccd6f262c","Type":"ContainerStarted","Data":"248903e7765cb842974618efa644386c4868c9ee7d086c147a4a6149357f2e64"} Nov 23 07:05:09 crc kubenswrapper[4988]: I1123 07:05:09.967268 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.581261509 podStartE2EDuration="1m2.967164139s" podCreationTimestamp="2025-11-23 07:04:07 +0000 UTC" firstStartedPulling="2025-11-23 07:04:09.279888979 +0000 UTC m=+1101.588401732" lastFinishedPulling="2025-11-23 07:04:35.665791599 +0000 UTC m=+1127.974304362" observedRunningTime="2025-11-23 07:05:09.961836827 +0000 UTC m=+1162.270349650" watchObservedRunningTime="2025-11-23 
07:05:09.967164139 +0000 UTC m=+1162.275676932" Nov 23 07:05:10 crc kubenswrapper[4988]: I1123 07:05:10.017277 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-c4zpx" podStartSLOduration=2.455993475 podStartE2EDuration="10.017255842s" podCreationTimestamp="2025-11-23 07:05:00 +0000 UTC" firstStartedPulling="2025-11-23 07:05:00.884059919 +0000 UTC m=+1153.192572682" lastFinishedPulling="2025-11-23 07:05:08.445322286 +0000 UTC m=+1160.753835049" observedRunningTime="2025-11-23 07:05:10.017090438 +0000 UTC m=+1162.325603221" watchObservedRunningTime="2025-11-23 07:05:10.017255842 +0000 UTC m=+1162.325768615" Nov 23 07:05:10 crc kubenswrapper[4988]: I1123 07:05:10.505152 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11409714-1e50-472e-bfdf-1d964d2b19b7" path="/var/lib/kubelet/pods/11409714-1e50-472e-bfdf-1d964d2b19b7/volumes" Nov 23 07:05:12 crc kubenswrapper[4988]: I1123 07:05:12.119491 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:05:12 crc kubenswrapper[4988]: E1123 07:05:12.119795 4988 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:05:12 crc kubenswrapper[4988]: E1123 07:05:12.119841 4988 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:05:12 crc kubenswrapper[4988]: E1123 07:05:12.119963 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift podName:fa95668c-09b0-4440-ab49-f1a1b29ebf64 nodeName:}" failed. No retries permitted until 2025-11-23 07:05:28.119928895 +0000 UTC m=+1180.428441698 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift") pod "swift-storage-0" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64") : configmap "swift-ring-files" not found Nov 23 07:05:13 crc kubenswrapper[4988]: I1123 07:05:13.833961 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zcfbn" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" probeResult="failure" output=< Nov 23 07:05:13 crc kubenswrapper[4988]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 23 07:05:13 crc kubenswrapper[4988]: > Nov 23 07:05:13 crc kubenswrapper[4988]: I1123 07:05:13.849514 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7xsjx" Nov 23 07:05:18 crc kubenswrapper[4988]: I1123 07:05:18.846234 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zcfbn" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" probeResult="failure" output=< Nov 23 07:05:18 crc kubenswrapper[4988]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 23 07:05:18 crc kubenswrapper[4988]: > Nov 23 07:05:18 crc kubenswrapper[4988]: I1123 07:05:18.869935 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7xsjx" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.094249 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zcfbn-config-dmqqv"] Nov 23 07:05:19 crc kubenswrapper[4988]: E1123 07:05:19.095228 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11409714-1e50-472e-bfdf-1d964d2b19b7" containerName="init" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.095252 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="11409714-1e50-472e-bfdf-1d964d2b19b7" containerName="init" Nov 23 07:05:19 crc kubenswrapper[4988]: E1123 07:05:19.095269 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11409714-1e50-472e-bfdf-1d964d2b19b7" containerName="dnsmasq-dns" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.095278 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="11409714-1e50-472e-bfdf-1d964d2b19b7" containerName="dnsmasq-dns" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.095749 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="11409714-1e50-472e-bfdf-1d964d2b19b7" containerName="dnsmasq-dns" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.096649 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.100118 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.126129 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zcfbn-config-dmqqv"] Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.254605 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-scripts\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.254679 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run-ovn\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.254735 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-additional-scripts\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.254786 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-log-ovn\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.254873 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.254919 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smbtz\" (UniqueName: \"kubernetes.io/projected/3592c3d0-1b2f-439d-8157-fa3724655664-kube-api-access-smbtz\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.356120 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-scripts\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.356178 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run-ovn\") pod 
\"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.356216 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-additional-scripts\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.356242 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-log-ovn\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.356305 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.356331 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smbtz\" (UniqueName: \"kubernetes.io/projected/3592c3d0-1b2f-439d-8157-fa3724655664-kube-api-access-smbtz\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.356697 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.356710 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-log-ovn\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.356754 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run-ovn\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.357376 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-additional-scripts\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.358469 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-scripts\") pod 
\"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.374903 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smbtz\" (UniqueName: \"kubernetes.io/projected/3592c3d0-1b2f-439d-8157-fa3724655664-kube-api-access-smbtz\") pod \"ovn-controller-zcfbn-config-dmqqv\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:19 crc kubenswrapper[4988]: I1123 07:05:19.422947 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:20 crc kubenswrapper[4988]: I1123 07:05:20.026914 4988 generic.go:334] "Generic (PLEG): container finished" podID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerID="26570438aa5de5396fe7cefb58e572a2b24bad93573fe8afe37c0a4d296c6949" exitCode=0 Nov 23 07:05:20 crc kubenswrapper[4988]: I1123 07:05:20.026975 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b12d6f8-ea7a-4a60-b459-11563683791d","Type":"ContainerDied","Data":"26570438aa5de5396fe7cefb58e572a2b24bad93573fe8afe37c0a4d296c6949"} Nov 23 07:05:23 crc kubenswrapper[4988]: I1123 07:05:23.804238 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zcfbn" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" probeResult="failure" output=< Nov 23 07:05:23 crc kubenswrapper[4988]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 23 07:05:23 crc kubenswrapper[4988]: > Nov 23 07:05:24 crc kubenswrapper[4988]: I1123 07:05:24.060685 4988 generic.go:334] "Generic (PLEG): container finished" podID="d9f85c9f-2478-4293-85cb-17eccd6f262c" containerID="248903e7765cb842974618efa644386c4868c9ee7d086c147a4a6149357f2e64" exitCode=0 Nov 23 07:05:24 crc kubenswrapper[4988]: I1123 07:05:24.060727 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c4zpx" event={"ID":"d9f85c9f-2478-4293-85cb-17eccd6f262c","Type":"ContainerDied","Data":"248903e7765cb842974618efa644386c4868c9ee7d086c147a4a6149357f2e64"} Nov 23 07:05:26 crc kubenswrapper[4988]: E1123 07:05:26.614904 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29" Nov 23 07:05:26 crc kubenswrapper[4988]: E1123 07:05:26.616022 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbq2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-bprtz_openstack(98aed9f5-ae61-4e6e-bd79-0dbc90fedf61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:05:26 crc kubenswrapper[4988]: E1123 07:05:26.617601 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-bprtz" podUID="98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.731387 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.783728 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d9f85c9f-2478-4293-85cb-17eccd6f262c-etc-swift\") pod \"d9f85c9f-2478-4293-85cb-17eccd6f262c\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.784051 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-swiftconf\") pod \"d9f85c9f-2478-4293-85cb-17eccd6f262c\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.784104 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvp7j\" (UniqueName: \"kubernetes.io/projected/d9f85c9f-2478-4293-85cb-17eccd6f262c-kube-api-access-nvp7j\") pod \"d9f85c9f-2478-4293-85cb-17eccd6f262c\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.784132 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-ring-data-devices\") pod \"d9f85c9f-2478-4293-85cb-17eccd6f262c\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.784222 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-dispersionconf\") pod \"d9f85c9f-2478-4293-85cb-17eccd6f262c\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.784258 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-scripts\") pod \"d9f85c9f-2478-4293-85cb-17eccd6f262c\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.784315 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-combined-ca-bundle\") pod \"d9f85c9f-2478-4293-85cb-17eccd6f262c\" (UID: \"d9f85c9f-2478-4293-85cb-17eccd6f262c\") " Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.785874 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9f85c9f-2478-4293-85cb-17eccd6f262c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d9f85c9f-2478-4293-85cb-17eccd6f262c" (UID: "d9f85c9f-2478-4293-85cb-17eccd6f262c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.787333 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d9f85c9f-2478-4293-85cb-17eccd6f262c" (UID: "d9f85c9f-2478-4293-85cb-17eccd6f262c"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.792835 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f85c9f-2478-4293-85cb-17eccd6f262c-kube-api-access-nvp7j" (OuterVolumeSpecName: "kube-api-access-nvp7j") pod "d9f85c9f-2478-4293-85cb-17eccd6f262c" (UID: "d9f85c9f-2478-4293-85cb-17eccd6f262c"). InnerVolumeSpecName "kube-api-access-nvp7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.796106 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d9f85c9f-2478-4293-85cb-17eccd6f262c" (UID: "d9f85c9f-2478-4293-85cb-17eccd6f262c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.818817 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d9f85c9f-2478-4293-85cb-17eccd6f262c" (UID: "d9f85c9f-2478-4293-85cb-17eccd6f262c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.818926 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-scripts" (OuterVolumeSpecName: "scripts") pod "d9f85c9f-2478-4293-85cb-17eccd6f262c" (UID: "d9f85c9f-2478-4293-85cb-17eccd6f262c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.831187 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9f85c9f-2478-4293-85cb-17eccd6f262c" (UID: "d9f85c9f-2478-4293-85cb-17eccd6f262c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.886490 4988 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.886758 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.886852 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.886948 4988 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d9f85c9f-2478-4293-85cb-17eccd6f262c-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.887028 4988 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d9f85c9f-2478-4293-85cb-17eccd6f262c-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.887142 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvp7j\" (UniqueName: \"kubernetes.io/projected/d9f85c9f-2478-4293-85cb-17eccd6f262c-kube-api-access-nvp7j\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:26 crc kubenswrapper[4988]: I1123 07:05:26.887311 4988 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d9f85c9f-2478-4293-85cb-17eccd6f262c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:27 crc kubenswrapper[4988]: I1123 07:05:27.038421 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zcfbn-config-dmqqv"] Nov 23 07:05:27 crc kubenswrapper[4988]: W1123 07:05:27.039120 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3592c3d0_1b2f_439d_8157_fa3724655664.slice/crio-3f6df1ad65cb9bf941bc586629de325fd94b2dbad413b5e7af4297e7d7e364c1 WatchSource:0}: Error finding container 3f6df1ad65cb9bf941bc586629de325fd94b2dbad413b5e7af4297e7d7e364c1: Status 404 returned error can't find the container with id 3f6df1ad65cb9bf941bc586629de325fd94b2dbad413b5e7af4297e7d7e364c1 Nov 23 07:05:27 crc kubenswrapper[4988]: I1123 07:05:27.083272 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-c4zpx" event={"ID":"d9f85c9f-2478-4293-85cb-17eccd6f262c","Type":"ContainerDied","Data":"904bbafbe8a3d192ff2d78dbd5fab84ebac6af63a548c2d54a07c86851e9b385"} Nov 23 07:05:27 crc kubenswrapper[4988]: I1123 07:05:27.083306 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="904bbafbe8a3d192ff2d78dbd5fab84ebac6af63a548c2d54a07c86851e9b385" Nov 23 07:05:27 crc kubenswrapper[4988]: I1123 07:05:27.083478 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-c4zpx" Nov 23 07:05:27 crc kubenswrapper[4988]: I1123 07:05:27.085774 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b12d6f8-ea7a-4a60-b459-11563683791d","Type":"ContainerStarted","Data":"901399c306cb37106a8d64f41934670193530a63fe14371cd50172d426d923d6"} Nov 23 07:05:27 crc kubenswrapper[4988]: I1123 07:05:27.085970 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 07:05:27 crc kubenswrapper[4988]: I1123 07:05:27.089079 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zcfbn-config-dmqqv" event={"ID":"3592c3d0-1b2f-439d-8157-fa3724655664","Type":"ContainerStarted","Data":"3f6df1ad65cb9bf941bc586629de325fd94b2dbad413b5e7af4297e7d7e364c1"} Nov 23 07:05:27 crc kubenswrapper[4988]: E1123 07:05:27.091049 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29\\\"\"" pod="openstack/glance-db-sync-bprtz" podUID="98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" Nov 23 07:05:27 crc kubenswrapper[4988]: I1123 07:05:27.126118 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371957.728685 podStartE2EDuration="1m19.126089713s" podCreationTimestamp="2025-11-23 07:04:08 +0000 UTC" firstStartedPulling="2025-11-23 07:04:10.076683679 +0000 UTC m=+1102.385196442" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:05:27.111662273 +0000 UTC m=+1179.420175036" watchObservedRunningTime="2025-11-23 07:05:27.126089713 +0000 UTC m=+1179.434602506" Nov 23 07:05:28 crc kubenswrapper[4988]: I1123 07:05:28.097899 4988 generic.go:334] "Generic (PLEG): container finished" podID="3592c3d0-1b2f-439d-8157-fa3724655664" containerID="7b846d3e72b93645c0f916a10993b0f93bb92fc366104b751731cee174aecd6f" exitCode=0 Nov 23 07:05:28 crc kubenswrapper[4988]: I1123 07:05:28.098105 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zcfbn-config-dmqqv" event={"ID":"3592c3d0-1b2f-439d-8157-fa3724655664","Type":"ContainerDied","Data":"7b846d3e72b93645c0f916a10993b0f93bb92fc366104b751731cee174aecd6f"} Nov 23 07:05:28 crc kubenswrapper[4988]: I1123 07:05:28.209095 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:05:28 crc kubenswrapper[4988]: I1123 07:05:28.216532 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"swift-storage-0\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " pod="openstack/swift-storage-0" Nov 23 07:05:28 crc kubenswrapper[4988]: I1123 07:05:28.415428 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 23 07:05:28 crc kubenswrapper[4988]: I1123 07:05:28.781588 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:05:28 crc kubenswrapper[4988]: I1123 07:05:28.837867 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zcfbn" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.085805 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:05:29 crc kubenswrapper[4988]: W1123 07:05:29.100826 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa95668c_09b0_4440_ab49_f1a1b29ebf64.slice/crio-1225812bb325d956d4d318e786e4a0703285c095ba6c8fa7b9eb58b17fc9a7ee WatchSource:0}: Error finding container 1225812bb325d956d4d318e786e4a0703285c095ba6c8fa7b9eb58b17fc9a7ee: Status 404 returned error can't find the container with id 1225812bb325d956d4d318e786e4a0703285c095ba6c8fa7b9eb58b17fc9a7ee Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.398733 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.436469 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run-ovn\") pod \"3592c3d0-1b2f-439d-8157-fa3724655664\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.436514 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-scripts\") pod \"3592c3d0-1b2f-439d-8157-fa3724655664\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.436544 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smbtz\" (UniqueName: \"kubernetes.io/projected/3592c3d0-1b2f-439d-8157-fa3724655664-kube-api-access-smbtz\") pod \"3592c3d0-1b2f-439d-8157-fa3724655664\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.436602 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-log-ovn\") pod \"3592c3d0-1b2f-439d-8157-fa3724655664\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.436669 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run\") pod \"3592c3d0-1b2f-439d-8157-fa3724655664\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.436719 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-additional-scripts\") pod \"3592c3d0-1b2f-439d-8157-fa3724655664\" (UID: \"3592c3d0-1b2f-439d-8157-fa3724655664\") " Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.437670 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3592c3d0-1b2f-439d-8157-fa3724655664" (UID: "3592c3d0-1b2f-439d-8157-fa3724655664"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.437707 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3592c3d0-1b2f-439d-8157-fa3724655664" (UID: "3592c3d0-1b2f-439d-8157-fa3724655664"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.438261 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3592c3d0-1b2f-439d-8157-fa3724655664" (UID: "3592c3d0-1b2f-439d-8157-fa3724655664"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.438316 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run" (OuterVolumeSpecName: "var-run") pod "3592c3d0-1b2f-439d-8157-fa3724655664" (UID: "3592c3d0-1b2f-439d-8157-fa3724655664"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.438363 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-scripts" (OuterVolumeSpecName: "scripts") pod "3592c3d0-1b2f-439d-8157-fa3724655664" (UID: "3592c3d0-1b2f-439d-8157-fa3724655664"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.444361 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3592c3d0-1b2f-439d-8157-fa3724655664-kube-api-access-smbtz" (OuterVolumeSpecName: "kube-api-access-smbtz") pod "3592c3d0-1b2f-439d-8157-fa3724655664" (UID: "3592c3d0-1b2f-439d-8157-fa3724655664"). InnerVolumeSpecName "kube-api-access-smbtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.538351 4988 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.538385 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.538394 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smbtz\" (UniqueName: \"kubernetes.io/projected/3592c3d0-1b2f-439d-8157-fa3724655664-kube-api-access-smbtz\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.538407 4988 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.538416 4988 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3592c3d0-1b2f-439d-8157-fa3724655664-var-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:29 crc kubenswrapper[4988]: I1123 07:05:29.538424 4988 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3592c3d0-1b2f-439d-8157-fa3724655664-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:30 crc kubenswrapper[4988]: I1123 07:05:30.111438 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zcfbn-config-dmqqv" event={"ID":"3592c3d0-1b2f-439d-8157-fa3724655664","Type":"ContainerDied","Data":"3f6df1ad65cb9bf941bc586629de325fd94b2dbad413b5e7af4297e7d7e364c1"} Nov 23 07:05:30 crc kubenswrapper[4988]: I1123 07:05:30.111672 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f6df1ad65cb9bf941bc586629de325fd94b2dbad413b5e7af4297e7d7e364c1" Nov 23 07:05:30 crc kubenswrapper[4988]: I1123 07:05:30.111720 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zcfbn-config-dmqqv" Nov 23 07:05:30 crc kubenswrapper[4988]: I1123 07:05:30.123297 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"1225812bb325d956d4d318e786e4a0703285c095ba6c8fa7b9eb58b17fc9a7ee"} Nov 23 07:05:30 crc kubenswrapper[4988]: I1123 07:05:30.506424 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zcfbn-config-dmqqv"] Nov 23 07:05:30 crc kubenswrapper[4988]: I1123 07:05:30.506469 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zcfbn-config-dmqqv"] Nov 23 07:05:31 crc kubenswrapper[4988]: I1123 07:05:31.152478 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614"} Nov 23 07:05:31 crc kubenswrapper[4988]: I1123 07:05:31.152732 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26"} Nov 23 07:05:31 crc kubenswrapper[4988]: I1123 07:05:31.152742 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5"} Nov 23 07:05:32 crc kubenswrapper[4988]: I1123 07:05:32.164537 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3"} Nov 23 07:05:32 crc kubenswrapper[4988]: I1123 07:05:32.506722 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3592c3d0-1b2f-439d-8157-fa3724655664" path="/var/lib/kubelet/pods/3592c3d0-1b2f-439d-8157-fa3724655664/volumes" Nov 23 07:05:39 crc kubenswrapper[4988]: I1123 07:05:39.481794 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Nov 23 07:05:41 crc kubenswrapper[4988]: I1123 07:05:41.255821 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de"} Nov 23 07:05:41 crc kubenswrapper[4988]: I1123 07:05:41.256220 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732"} Nov 23 07:05:42 crc kubenswrapper[4988]: I1123 07:05:42.268528 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b"} Nov 23 07:05:42 crc kubenswrapper[4988]: I1123 07:05:42.268896 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc"} Nov 23 07:05:42 crc kubenswrapper[4988]: I1123 07:05:42.270393 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bprtz" event={"ID":"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61","Type":"ContainerStarted","Data":"f5db40ede098b182652c10d4de12c61dbf4706d68b094c3825e1fa0b35f1889a"} Nov 23 07:05:42 crc kubenswrapper[4988]: I1123 07:05:42.292416 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bprtz" podStartSLOduration=2.583746844 podStartE2EDuration="34.292392882s" podCreationTimestamp="2025-11-23 07:05:08 +0000 UTC" firstStartedPulling="2025-11-23 07:05:09.014001859 +0000 UTC m=+1161.322514632" lastFinishedPulling="2025-11-23 07:05:40.722647907 +0000 UTC m=+1193.031160670" observedRunningTime="2025-11-23 07:05:42.289002049 +0000 UTC m=+1194.597514862" watchObservedRunningTime="2025-11-23 07:05:42.292392882 +0000 UTC m=+1194.600905685" Nov 23 07:05:43 crc kubenswrapper[4988]: I1123 07:05:43.288377 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd"} Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.321123 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4"} Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.321462 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa"} Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.321479 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e"} Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.321489 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764"} Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.321500 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474"} Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.321509 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerStarted","Data":"ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c"} Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.368548 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.449627711 podStartE2EDuration="49.368529495s" podCreationTimestamp="2025-11-23 07:04:55 +0000 UTC" firstStartedPulling="2025-11-23 07:05:29.105694149 +0000 UTC 
m=+1181.414206912" lastFinishedPulling="2025-11-23 07:05:43.024595923 +0000 UTC m=+1195.333108696" observedRunningTime="2025-11-23 07:05:44.358349829 +0000 UTC m=+1196.666862612" watchObservedRunningTime="2025-11-23 07:05:44.368529495 +0000 UTC m=+1196.677042258" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.618239 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56766df65f-s4kcr"] Nov 23 07:05:44 crc kubenswrapper[4988]: E1123 07:05:44.618656 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3592c3d0-1b2f-439d-8157-fa3724655664" containerName="ovn-config" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.618679 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3592c3d0-1b2f-439d-8157-fa3724655664" containerName="ovn-config" Nov 23 07:05:44 crc kubenswrapper[4988]: E1123 07:05:44.618705 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f85c9f-2478-4293-85cb-17eccd6f262c" containerName="swift-ring-rebalance" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.618713 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f85c9f-2478-4293-85cb-17eccd6f262c" containerName="swift-ring-rebalance" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.618901 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f85c9f-2478-4293-85cb-17eccd6f262c" containerName="swift-ring-rebalance" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.618942 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3592c3d0-1b2f-439d-8157-fa3724655664" containerName="ovn-config" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.619895 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.622794 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.630579 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-s4kcr"] Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.795227 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wg5h\" (UniqueName: \"kubernetes.io/projected/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-kube-api-access-9wg5h\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.795294 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-svc\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.795552 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-sb\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.795604 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-config\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.795634 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-swift-storage-0\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.795707 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-nb\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.897607 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-sb\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.897674 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-config\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.897702 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-swift-storage-0\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.897735 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-nb\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.897802 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wg5h\" (UniqueName: \"kubernetes.io/projected/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-kube-api-access-9wg5h\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.897835 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-svc\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.898714 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-sb\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.898720 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-config\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.898819 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-nb\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.898888 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-svc\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.899379 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-swift-storage-0\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.918477 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wg5h\" (UniqueName: \"kubernetes.io/projected/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-kube-api-access-9wg5h\") pod \"dnsmasq-dns-56766df65f-s4kcr\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:44 crc kubenswrapper[4988]: I1123 07:05:44.935804 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:45 crc kubenswrapper[4988]: I1123 07:05:45.165094 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-s4kcr"] Nov 23 07:05:45 crc kubenswrapper[4988]: W1123 07:05:45.169833 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc2b5c45_5065_49b8_ae8a_4b3429fb47aa.slice/crio-1adfe2aa31ba203e737e882c74be6915ef1ba16a5ee391c9a7f813905f893d43 WatchSource:0}: Error finding container 1adfe2aa31ba203e737e882c74be6915ef1ba16a5ee391c9a7f813905f893d43: Status 404 returned error can't find the container with id 1adfe2aa31ba203e737e882c74be6915ef1ba16a5ee391c9a7f813905f893d43 Nov 23 07:05:45 crc kubenswrapper[4988]: I1123 07:05:45.331917 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" event={"ID":"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa","Type":"ContainerStarted","Data":"1adfe2aa31ba203e737e882c74be6915ef1ba16a5ee391c9a7f813905f893d43"} Nov 23 07:05:46 crc kubenswrapper[4988]: I1123 07:05:46.343426 4988 generic.go:334] "Generic (PLEG): container finished" podID="dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" containerID="10d4b261f2e20d18ea1fa927515c44593fdc06c0bfafe2479e88af6dd0cb6c6e" exitCode=0 Nov 23 07:05:46 crc kubenswrapper[4988]: I1123 07:05:46.343577 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" event={"ID":"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa","Type":"ContainerDied","Data":"10d4b261f2e20d18ea1fa927515c44593fdc06c0bfafe2479e88af6dd0cb6c6e"} Nov 23 07:05:47 crc kubenswrapper[4988]: I1123 07:05:47.358099 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" event={"ID":"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa","Type":"ContainerStarted","Data":"71ae309adcc7b1a3c36b367a1d2d1f4a96d708b3b217205c9b0ec3f825bb0624"} Nov 23 07:05:47 crc kubenswrapper[4988]: I1123 07:05:47.358311 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:47 crc kubenswrapper[4988]: I1123 07:05:47.392385 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" podStartSLOduration=3.3923652029999998 podStartE2EDuration="3.392365203s" podCreationTimestamp="2025-11-23 07:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:05:47.389806271 +0000 UTC m=+1199.698319044" watchObservedRunningTime="2025-11-23 07:05:47.392365203 +0000 UTC m=+1199.700877976" Nov 23 07:05:48 crc kubenswrapper[4988]: I1123 07:05:48.370492 4988 generic.go:334] "Generic (PLEG): container finished" podID="98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" containerID="f5db40ede098b182652c10d4de12c61dbf4706d68b094c3825e1fa0b35f1889a" exitCode=0 Nov 23 07:05:48 crc kubenswrapper[4988]: I1123 07:05:48.370604 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bprtz" event={"ID":"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61","Type":"ContainerDied","Data":"f5db40ede098b182652c10d4de12c61dbf4706d68b094c3825e1fa0b35f1889a"} Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.481565 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.793691 4988 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/cinder-db-create-pp579"] Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.795100 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pp579" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.828445 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pp579"] Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.876688 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-crz5q"] Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.877636 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.899827 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.899937 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-crz5q"] Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.902023 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d63ecb83-f85e-48fd-b8ab-0f7720422936-operator-scripts\") pod \"cinder-db-create-pp579\" (UID: \"d63ecb83-f85e-48fd-b8ab-0f7720422936\") " pod="openstack/cinder-db-create-pp579" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.902166 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmnc8\" (UniqueName: \"kubernetes.io/projected/d63ecb83-f85e-48fd-b8ab-0f7720422936-kube-api-access-dmnc8\") pod \"cinder-db-create-pp579\" (UID: \"d63ecb83-f85e-48fd-b8ab-0f7720422936\") " pod="openstack/cinder-db-create-pp579" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.906335 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-bfce-account-create-4qbgx"] Nov 23 07:05:49 crc kubenswrapper[4988]: E1123 07:05:49.906694 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" containerName="glance-db-sync" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.906712 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" containerName="glance-db-sync" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.906905 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" containerName="glance-db-sync" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.907431 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.915122 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.935414 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-bfce-account-create-4qbgx"] Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.995344 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-80d1-account-create-zv6s7"] Nov 23 07:05:49 crc kubenswrapper[4988]: I1123 07:05:49.996333 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.002910 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-combined-ca-bundle\") pod \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.003008 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbq2x\" (UniqueName: \"kubernetes.io/projected/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-kube-api-access-fbq2x\") pod \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.003044 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-config-data\") pod \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.003100 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-db-sync-config-data\") pod \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\" (UID: \"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61\") " Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.003394 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd80c51-5410-48ee-98da-4c6509b59e04-operator-scripts\") pod \"barbican-bfce-account-create-4qbgx\" (UID: \"5dd80c51-5410-48ee-98da-4c6509b59e04\") " pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.003428 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltgnw\" (UniqueName: \"kubernetes.io/projected/5dd80c51-5410-48ee-98da-4c6509b59e04-kube-api-access-ltgnw\") pod \"barbican-bfce-account-create-4qbgx\" (UID: \"5dd80c51-5410-48ee-98da-4c6509b59e04\") " pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.003461 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf5dg\" (UniqueName: \"kubernetes.io/projected/208b3820-2fbb-4e5b-bfa1-170b30f28af6-kube-api-access-qf5dg\") pod \"barbican-db-create-crz5q\" (UID: \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\") " pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.003483 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmnc8\" (UniqueName: \"kubernetes.io/projected/d63ecb83-f85e-48fd-b8ab-0f7720422936-kube-api-access-dmnc8\") pod \"cinder-db-create-pp579\" (UID: \"d63ecb83-f85e-48fd-b8ab-0f7720422936\") " pod="openstack/cinder-db-create-pp579" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.003501 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/208b3820-2fbb-4e5b-bfa1-170b30f28af6-operator-scripts\") pod \"barbican-db-create-crz5q\" (UID: \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\") " 
pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.003565 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d63ecb83-f85e-48fd-b8ab-0f7720422936-operator-scripts\") pod \"cinder-db-create-pp579\" (UID: \"d63ecb83-f85e-48fd-b8ab-0f7720422936\") " pod="openstack/cinder-db-create-pp579" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.004255 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d63ecb83-f85e-48fd-b8ab-0f7720422936-operator-scripts\") pod \"cinder-db-create-pp579\" (UID: \"d63ecb83-f85e-48fd-b8ab-0f7720422936\") " pod="openstack/cinder-db-create-pp579" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.066559 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.067425 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-kube-api-access-fbq2x" (OuterVolumeSpecName: "kube-api-access-fbq2x") pod "98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" (UID: "98aed9f5-ae61-4e6e-bd79-0dbc90fedf61"). InnerVolumeSpecName "kube-api-access-fbq2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.071704 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" (UID: "98aed9f5-ae61-4e6e-bd79-0dbc90fedf61"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.078680 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" (UID: "98aed9f5-ae61-4e6e-bd79-0dbc90fedf61"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.089450 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-80d1-account-create-zv6s7"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.099314 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmnc8\" (UniqueName: \"kubernetes.io/projected/d63ecb83-f85e-48fd-b8ab-0f7720422936-kube-api-access-dmnc8\") pod \"cinder-db-create-pp579\" (UID: \"d63ecb83-f85e-48fd-b8ab-0f7720422936\") " pod="openstack/cinder-db-create-pp579" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.105988 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58hb\" (UniqueName: \"kubernetes.io/projected/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-kube-api-access-n58hb\") pod \"cinder-80d1-account-create-zv6s7\" (UID: \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\") " pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.107525 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-operator-scripts\") pod \"cinder-80d1-account-create-zv6s7\" (UID: \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\") " pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.107608 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd80c51-5410-48ee-98da-4c6509b59e04-operator-scripts\") pod \"barbican-bfce-account-create-4qbgx\" (UID: \"5dd80c51-5410-48ee-98da-4c6509b59e04\") " pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.107654 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltgnw\" (UniqueName: \"kubernetes.io/projected/5dd80c51-5410-48ee-98da-4c6509b59e04-kube-api-access-ltgnw\") pod \"barbican-bfce-account-create-4qbgx\" (UID: \"5dd80c51-5410-48ee-98da-4c6509b59e04\") " pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.107723 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf5dg\" (UniqueName: \"kubernetes.io/projected/208b3820-2fbb-4e5b-bfa1-170b30f28af6-kube-api-access-qf5dg\") pod \"barbican-db-create-crz5q\" (UID: \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\") " pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.107744 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/208b3820-2fbb-4e5b-bfa1-170b30f28af6-operator-scripts\") pod \"barbican-db-create-crz5q\" (UID: \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\") " pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.107942 4988 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.107955 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.107966 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbq2x\" (UniqueName: \"kubernetes.io/projected/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-kube-api-access-fbq2x\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.108700 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/208b3820-2fbb-4e5b-bfa1-170b30f28af6-operator-scripts\") pod \"barbican-db-create-crz5q\" (UID: \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\") " pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.108746 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd80c51-5410-48ee-98da-4c6509b59e04-operator-scripts\") pod \"barbican-bfce-account-create-4qbgx\" (UID: \"5dd80c51-5410-48ee-98da-4c6509b59e04\") " pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.116402 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-config-data" (OuterVolumeSpecName: "config-data") pod "98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" (UID: "98aed9f5-ae61-4e6e-bd79-0dbc90fedf61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.144588 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltgnw\" (UniqueName: \"kubernetes.io/projected/5dd80c51-5410-48ee-98da-4c6509b59e04-kube-api-access-ltgnw\") pod \"barbican-bfce-account-create-4qbgx\" (UID: \"5dd80c51-5410-48ee-98da-4c6509b59e04\") " pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.152466 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf5dg\" (UniqueName: \"kubernetes.io/projected/208b3820-2fbb-4e5b-bfa1-170b30f28af6-kube-api-access-qf5dg\") pod \"barbican-db-create-crz5q\" (UID: \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\") " pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.160439 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rs4l6"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.162869 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.164683 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.164775 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.164835 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.165061 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zkrc" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.175139 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rs4l6"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.197728 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-h7wgh"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.198158 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pp579" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.199831 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.209348 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n58hb\" (UniqueName: \"kubernetes.io/projected/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-kube-api-access-n58hb\") pod \"cinder-80d1-account-create-zv6s7\" (UID: \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\") " pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.209405 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-config-data\") pod \"keystone-db-sync-rs4l6\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.209438 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-operator-scripts\") pod \"cinder-80d1-account-create-zv6s7\" (UID: \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\") " pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.209469 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-combined-ca-bundle\") pod \"keystone-db-sync-rs4l6\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.209516 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfjlp\" (UniqueName: \"kubernetes.io/projected/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-kube-api-access-vfjlp\") pod \"keystone-db-sync-rs4l6\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.209560 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.210379 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-operator-scripts\") pod \"cinder-80d1-account-create-zv6s7\" (UID: \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\") " pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.210557 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.223494 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.225267 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-h7wgh"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.235483 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58hb\" (UniqueName: \"kubernetes.io/projected/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-kube-api-access-n58hb\") pod \"cinder-80d1-account-create-zv6s7\" (UID: \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\") " pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.292484 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-21b3-account-create-p2lz4"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.300703 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.304359 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.311531 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-config-data\") pod \"keystone-db-sync-rs4l6\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.311577 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-operator-scripts\") pod \"neutron-db-create-h7wgh\" (UID: \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\") " pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.311634 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-combined-ca-bundle\") pod \"keystone-db-sync-rs4l6\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.311675 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwmc5\" (UniqueName: \"kubernetes.io/projected/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-kube-api-access-rwmc5\") pod \"neutron-db-create-h7wgh\" (UID: \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\") " pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:50 crc kubenswrapper[4988]: 
I1123 07:05:50.311697 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfjlp\" (UniqueName: \"kubernetes.io/projected/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-kube-api-access-vfjlp\") pod \"keystone-db-sync-rs4l6\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.316509 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-combined-ca-bundle\") pod \"keystone-db-sync-rs4l6\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.317811 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-21b3-account-create-p2lz4"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.318862 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-config-data\") pod \"keystone-db-sync-rs4l6\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.346571 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfjlp\" (UniqueName: \"kubernetes.io/projected/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-kube-api-access-vfjlp\") pod \"keystone-db-sync-rs4l6\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.404569 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bprtz" event={"ID":"98aed9f5-ae61-4e6e-bd79-0dbc90fedf61","Type":"ContainerDied","Data":"6fd6dd4134909aaa5dad51578fc3ff634e8b0440231933a77e72e29fed453914"} Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.404613 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fd6dd4134909aaa5dad51578fc3ff634e8b0440231933a77e72e29fed453914" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.404690 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bprtz" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.412654 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwmc5\" (UniqueName: \"kubernetes.io/projected/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-kube-api-access-rwmc5\") pod \"neutron-db-create-h7wgh\" (UID: \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\") " pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.412710 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88aa4931-d135-4771-90fb-302c92874f9e-operator-scripts\") pod \"neutron-21b3-account-create-p2lz4\" (UID: \"88aa4931-d135-4771-90fb-302c92874f9e\") " pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.412730 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx62r\" (UniqueName: \"kubernetes.io/projected/88aa4931-d135-4771-90fb-302c92874f9e-kube-api-access-dx62r\") pod \"neutron-21b3-account-create-p2lz4\" (UID: \"88aa4931-d135-4771-90fb-302c92874f9e\") " pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.412804 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-operator-scripts\") pod \"neutron-db-create-h7wgh\" (UID: \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\") " pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.413518 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-operator-scripts\") pod \"neutron-db-create-h7wgh\" (UID: \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\") " pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.446938 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwmc5\" (UniqueName: \"kubernetes.io/projected/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-kube-api-access-rwmc5\") pod \"neutron-db-create-h7wgh\" (UID: \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\") " pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.497721 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.498116 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.516091 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88aa4931-d135-4771-90fb-302c92874f9e-operator-scripts\") pod \"neutron-21b3-account-create-p2lz4\" (UID: \"88aa4931-d135-4771-90fb-302c92874f9e\") " pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.516130 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx62r\" (UniqueName: \"kubernetes.io/projected/88aa4931-d135-4771-90fb-302c92874f9e-kube-api-access-dx62r\") pod \"neutron-21b3-account-create-p2lz4\" (UID: \"88aa4931-d135-4771-90fb-302c92874f9e\") " pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.517154 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88aa4931-d135-4771-90fb-302c92874f9e-operator-scripts\") pod \"neutron-21b3-account-create-p2lz4\" (UID: \"88aa4931-d135-4771-90fb-302c92874f9e\") " pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.537297 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx62r\" (UniqueName: \"kubernetes.io/projected/88aa4931-d135-4771-90fb-302c92874f9e-kube-api-access-dx62r\") pod \"neutron-21b3-account-create-p2lz4\" (UID: \"88aa4931-d135-4771-90fb-302c92874f9e\") " pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.538802 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.623538 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.885912 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-bfce-account-create-4qbgx"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.927808 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-s4kcr"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.928037 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" podUID="dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" containerName="dnsmasq-dns" containerID="cri-o://71ae309adcc7b1a3c36b367a1d2d1f4a96d708b3b217205c9b0ec3f825bb0624" gracePeriod=10 Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.957941 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pp579"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.973946 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-2qd5t"] Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.980026 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:50 crc kubenswrapper[4988]: I1123 07:05:50.997030 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-2qd5t"] Nov 23 07:05:51 crc kubenswrapper[4988]: W1123 07:05:51.001578 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd63ecb83_f85e_48fd_b8ab_0f7720422936.slice/crio-c4321be127fe229468b9405cdfd87f40d538698cc3678a17626969da6337df8f WatchSource:0}: Error finding container c4321be127fe229468b9405cdfd87f40d538698cc3678a17626969da6337df8f: Status 404 returned error can't find the container with id c4321be127fe229468b9405cdfd87f40d538698cc3678a17626969da6337df8f Nov 23 07:05:51 crc kubenswrapper[4988]: I1123 07:05:51.024552 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-svc\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:51 crc kubenswrapper[4988]: I1123 07:05:51.024609 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-config\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:51 crc kubenswrapper[4988]: I1123 07:05:51.024649 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzn52\" (UniqueName: \"kubernetes.io/projected/0c14320c-7643-4a13-a602-480b33302bea-kube-api-access-rzn52\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:51 crc kubenswrapper[4988]: I1123 07:05:51.024704 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-swift-storage-0\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:51 crc kubenswrapper[4988]: I1123 07:05:51.024726 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-sb\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:51 crc kubenswrapper[4988]: I1123 07:05:51.024760 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-nb\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:51 crc kubenswrapper[4988]: I1123 07:05:51.128153 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-config\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " 
pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.126611 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-config\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.152652 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzn52\" (UniqueName: \"kubernetes.io/projected/0c14320c-7643-4a13-a602-480b33302bea-kube-api-access-rzn52\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.152816 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-swift-storage-0\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.152852 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-sb\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.152932 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-nb\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.152966 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-svc\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.153773 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-swift-storage-0\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.153838 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-svc\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.154090 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-sb\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: 
I1123 07:05:51.155263 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-nb\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.179443 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzn52\" (UniqueName: \"kubernetes.io/projected/0c14320c-7643-4a13-a602-480b33302bea-kube-api-access-rzn52\") pod \"dnsmasq-dns-6856c564b9-2qd5t\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.182038 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.246885 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-crz5q"] Nov 23 07:05:52 crc kubenswrapper[4988]: W1123 07:05:51.251217 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod208b3820_2fbb_4e5b_bfa1_170b30f28af6.slice/crio-21b7e2e489d81e75d05bee91ddef500c57d26b0c89732aba6cffd84488cdbd5d WatchSource:0}: Error finding container 21b7e2e489d81e75d05bee91ddef500c57d26b0c89732aba6cffd84488cdbd5d: Status 404 returned error can't find the container with id 21b7e2e489d81e75d05bee91ddef500c57d26b0c89732aba6cffd84488cdbd5d Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.320845 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-h7wgh"] Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.331092 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rs4l6"] Nov 23 07:05:52 crc kubenswrapper[4988]: W1123 07:05:51.345099 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a90861d_abe6_4af4_b8ae_ac44f7d1b748.slice/crio-77c9b20ed37de7e0c07d1472372499a0ca6511844e7927bb6f8e1449a59d00b3 WatchSource:0}: Error finding container 77c9b20ed37de7e0c07d1472372499a0ca6511844e7927bb6f8e1449a59d00b3: Status 404 returned error can't find the container with id 77c9b20ed37de7e0c07d1472372499a0ca6511844e7927bb6f8e1449a59d00b3 Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.417422 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-crz5q" event={"ID":"208b3820-2fbb-4e5b-bfa1-170b30f28af6","Type":"ContainerStarted","Data":"21b7e2e489d81e75d05bee91ddef500c57d26b0c89732aba6cffd84488cdbd5d"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.419996 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h7wgh" event={"ID":"2a90861d-abe6-4af4-b8ae-ac44f7d1b748","Type":"ContainerStarted","Data":"77c9b20ed37de7e0c07d1472372499a0ca6511844e7927bb6f8e1449a59d00b3"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.421998 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pp579" event={"ID":"d63ecb83-f85e-48fd-b8ab-0f7720422936","Type":"ContainerStarted","Data":"c4321be127fe229468b9405cdfd87f40d538698cc3678a17626969da6337df8f"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.424067 4988 generic.go:334] "Generic (PLEG): container finished" 
podID="dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" containerID="71ae309adcc7b1a3c36b367a1d2d1f4a96d708b3b217205c9b0ec3f825bb0624" exitCode=0 Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.424120 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" event={"ID":"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa","Type":"ContainerDied","Data":"71ae309adcc7b1a3c36b367a1d2d1f4a96d708b3b217205c9b0ec3f825bb0624"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.425299 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rs4l6" event={"ID":"402d3c21-bc17-4659-8ed8-cc7bfece6d0a","Type":"ContainerStarted","Data":"2f9548471e1ea41d9cd763b2a0f3c4268da9137e7fe6b0bdbc5092c9860cd86c"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.426444 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bfce-account-create-4qbgx" event={"ID":"5dd80c51-5410-48ee-98da-4c6509b59e04","Type":"ContainerStarted","Data":"7559055fd2a379a7c8e479779ac08cbaa47217ff7c5e0fcb81d6ed7afd8c720d"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.426474 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bfce-account-create-4qbgx" event={"ID":"5dd80c51-5410-48ee-98da-4c6509b59e04","Type":"ContainerStarted","Data":"511b2901583b82d1d5897c817768a47975345eb6ccad1f7e4d71bd2c70dfdbbd"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.462664 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-21b3-account-create-p2lz4"] Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:51.487671 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-80d1-account-create-zv6s7"] Nov 23 07:05:52 crc kubenswrapper[4988]: W1123 07:05:51.521382 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e18c0cd_2b02_48f7_a9a9_8e7bacc665e7.slice/crio-b9dc5e257f58f06bb3b750c5df5c98c90c1d674f9245fa450ebdafe3e0654446 WatchSource:0}: Error finding container b9dc5e257f58f06bb3b750c5df5c98c90c1d674f9245fa450ebdafe3e0654446: Status 404 returned error can't find the container with id b9dc5e257f58f06bb3b750c5df5c98c90c1d674f9245fa450ebdafe3e0654446 Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.317440 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.438477 4988 generic.go:334] "Generic (PLEG): container finished" podID="208b3820-2fbb-4e5b-bfa1-170b30f28af6" containerID="4e8c539cafce927fa0051ee822d18f425adf13d24727826db100e93d78fe2053" exitCode=0 Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.438571 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-crz5q" event={"ID":"208b3820-2fbb-4e5b-bfa1-170b30f28af6","Type":"ContainerDied","Data":"4e8c539cafce927fa0051ee822d18f425adf13d24727826db100e93d78fe2053"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.439947 4988 generic.go:334] "Generic (PLEG): container finished" podID="2a90861d-abe6-4af4-b8ae-ac44f7d1b748" containerID="f52af94e44bc73b049668b4f770488d00f378865dd3f3636885a12163eaae0ff" exitCode=0 Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.439988 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h7wgh" event={"ID":"2a90861d-abe6-4af4-b8ae-ac44f7d1b748","Type":"ContainerDied","Data":"f52af94e44bc73b049668b4f770488d00f378865dd3f3636885a12163eaae0ff"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.448930 4988 generic.go:334] "Generic (PLEG): container finished" podID="d63ecb83-f85e-48fd-b8ab-0f7720422936" containerID="f11be766db4504f2e41951963bad97f99438e4e69daf034d7d2d0af098154be1" exitCode=0 Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.449029 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pp579" event={"ID":"d63ecb83-f85e-48fd-b8ab-0f7720422936","Type":"ContainerDied","Data":"f11be766db4504f2e41951963bad97f99438e4e69daf034d7d2d0af098154be1"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.456346 4988 generic.go:334] "Generic (PLEG): container finished" podID="5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7" containerID="8143de56d9d4acbbde0e68754c31ea04f739ded08fe6ab5b6d541736e19f1585" exitCode=0 Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.456481 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-80d1-account-create-zv6s7" event={"ID":"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7","Type":"ContainerDied","Data":"8143de56d9d4acbbde0e68754c31ea04f739ded08fe6ab5b6d541736e19f1585"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.456507 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-80d1-account-create-zv6s7" event={"ID":"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7","Type":"ContainerStarted","Data":"b9dc5e257f58f06bb3b750c5df5c98c90c1d674f9245fa450ebdafe3e0654446"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.472040 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" event={"ID":"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa","Type":"ContainerDied","Data":"1adfe2aa31ba203e737e882c74be6915ef1ba16a5ee391c9a7f813905f893d43"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.472114 4988 scope.go:117] "RemoveContainer" containerID="71ae309adcc7b1a3c36b367a1d2d1f4a96d708b3b217205c9b0ec3f825bb0624" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.472262 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56766df65f-s4kcr" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.481273 4988 generic.go:334] "Generic (PLEG): container finished" podID="88aa4931-d135-4771-90fb-302c92874f9e" containerID="3a5bcd7c039d4e1e5b28798b930d0287d5c82a6c21ba4e1db683f177d84a0bea" exitCode=0 Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.481442 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-21b3-account-create-p2lz4" event={"ID":"88aa4931-d135-4771-90fb-302c92874f9e","Type":"ContainerDied","Data":"3a5bcd7c039d4e1e5b28798b930d0287d5c82a6c21ba4e1db683f177d84a0bea"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.481551 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-21b3-account-create-p2lz4" event={"ID":"88aa4931-d135-4771-90fb-302c92874f9e","Type":"ContainerStarted","Data":"4eb2b069120996d7d3fb7cc1862412b6758c52d7f3a62116fccc560e3e91756b"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.484717 4988 generic.go:334] "Generic (PLEG): container finished" podID="5dd80c51-5410-48ee-98da-4c6509b59e04" containerID="7559055fd2a379a7c8e479779ac08cbaa47217ff7c5e0fcb81d6ed7afd8c720d" exitCode=0 Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.484762 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bfce-account-create-4qbgx" event={"ID":"5dd80c51-5410-48ee-98da-4c6509b59e04","Type":"ContainerDied","Data":"7559055fd2a379a7c8e479779ac08cbaa47217ff7c5e0fcb81d6ed7afd8c720d"} Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.498308 4988 scope.go:117] "RemoveContainer" containerID="10d4b261f2e20d18ea1fa927515c44593fdc06c0bfafe2479e88af6dd0cb6c6e" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.504009 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-config\") pod \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.504168 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-sb\") pod \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.504245 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-swift-storage-0\") pod \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.504276 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-svc\") pod \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.504358 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wg5h\" (UniqueName: \"kubernetes.io/projected/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-kube-api-access-9wg5h\") pod \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.504386 4988 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-nb\") pod \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\" (UID: \"dc2b5c45-5065-49b8-ae8a-4b3429fb47aa\") " Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.518933 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-kube-api-access-9wg5h" (OuterVolumeSpecName: "kube-api-access-9wg5h") pod "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" (UID: "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa"). InnerVolumeSpecName "kube-api-access-9wg5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.569323 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" (UID: "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.579256 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-config" (OuterVolumeSpecName: "config") pod "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" (UID: "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.579430 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" (UID: "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.580054 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" (UID: "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.594434 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" (UID: "dc2b5c45-5065-49b8-ae8a-4b3429fb47aa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.608246 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.608282 4988 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.608292 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.608303 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wg5h\" (UniqueName: \"kubernetes.io/projected/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-kube-api-access-9wg5h\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.608314 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.608321 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.608643 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-2qd5t"] Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.874332 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-s4kcr"] Nov 23 07:05:52 crc kubenswrapper[4988]: I1123 07:05:52.888033 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56766df65f-s4kcr"] Nov 23 07:05:53 crc kubenswrapper[4988]: I1123 07:05:53.495319 4988 generic.go:334] "Generic (PLEG): container finished" podID="0c14320c-7643-4a13-a602-480b33302bea" containerID="430acce0c53d4a52d3e166b03a29b79391ae7a5aa1578887fa0af764f4878671" exitCode=0 Nov 23 07:05:53 crc kubenswrapper[4988]: I1123 07:05:53.495543 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" event={"ID":"0c14320c-7643-4a13-a602-480b33302bea","Type":"ContainerDied","Data":"430acce0c53d4a52d3e166b03a29b79391ae7a5aa1578887fa0af764f4878671"} Nov 23 07:05:53 crc kubenswrapper[4988]: I1123 07:05:53.495722 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" event={"ID":"0c14320c-7643-4a13-a602-480b33302bea","Type":"ContainerStarted","Data":"70f28edc084ff57aafd8284b47c81698c8401e2b1ace33041b1fac20bf5e753b"} Nov 23 07:05:54 crc kubenswrapper[4988]: I1123 07:05:54.514413 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" path="/var/lib/kubelet/pods/dc2b5c45-5065-49b8-ae8a-4b3429fb47aa/volumes" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.490698 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.503116 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pp579" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.506108 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.541690 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.552820 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bfce-account-create-4qbgx" event={"ID":"5dd80c51-5410-48ee-98da-4c6509b59e04","Type":"ContainerDied","Data":"511b2901583b82d1d5897c817768a47975345eb6ccad1f7e4d71bd2c70dfdbbd"} Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.552861 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="511b2901583b82d1d5897c817768a47975345eb6ccad1f7e4d71bd2c70dfdbbd" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.552915 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bfce-account-create-4qbgx" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.554915 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.556182 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-crz5q" event={"ID":"208b3820-2fbb-4e5b-bfa1-170b30f28af6","Type":"ContainerDied","Data":"21b7e2e489d81e75d05bee91ddef500c57d26b0c89732aba6cffd84488cdbd5d"} Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.556230 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21b7e2e489d81e75d05bee91ddef500c57d26b0c89732aba6cffd84488cdbd5d" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.560314 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-h7wgh" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.560349 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-h7wgh" event={"ID":"2a90861d-abe6-4af4-b8ae-ac44f7d1b748","Type":"ContainerDied","Data":"77c9b20ed37de7e0c07d1472372499a0ca6511844e7927bb6f8e1449a59d00b3"} Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.560398 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77c9b20ed37de7e0c07d1472372499a0ca6511844e7927bb6f8e1449a59d00b3" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.580719 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.580857 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pp579" event={"ID":"d63ecb83-f85e-48fd-b8ab-0f7720422936","Type":"ContainerDied","Data":"c4321be127fe229468b9405cdfd87f40d538698cc3678a17626969da6337df8f"} Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.580877 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4321be127fe229468b9405cdfd87f40d538698cc3678a17626969da6337df8f" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.580912 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pp579" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.583094 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-80d1-account-create-zv6s7" event={"ID":"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7","Type":"ContainerDied","Data":"b9dc5e257f58f06bb3b750c5df5c98c90c1d674f9245fa450ebdafe3e0654446"} Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.583129 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9dc5e257f58f06bb3b750c5df5c98c90c1d674f9245fa450ebdafe3e0654446" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.583174 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-80d1-account-create-zv6s7" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.589935 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-21b3-account-create-p2lz4" event={"ID":"88aa4931-d135-4771-90fb-302c92874f9e","Type":"ContainerDied","Data":"4eb2b069120996d7d3fb7cc1862412b6758c52d7f3a62116fccc560e3e91756b"} Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.589972 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eb2b069120996d7d3fb7cc1862412b6758c52d7f3a62116fccc560e3e91756b" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.590023 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-21b3-account-create-p2lz4" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.617311 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd80c51-5410-48ee-98da-4c6509b59e04-operator-scripts\") pod \"5dd80c51-5410-48ee-98da-4c6509b59e04\" (UID: \"5dd80c51-5410-48ee-98da-4c6509b59e04\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.617366 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88aa4931-d135-4771-90fb-302c92874f9e-operator-scripts\") pod \"88aa4931-d135-4771-90fb-302c92874f9e\" (UID: \"88aa4931-d135-4771-90fb-302c92874f9e\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.617422 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx62r\" (UniqueName: \"kubernetes.io/projected/88aa4931-d135-4771-90fb-302c92874f9e-kube-api-access-dx62r\") pod \"88aa4931-d135-4771-90fb-302c92874f9e\" (UID: \"88aa4931-d135-4771-90fb-302c92874f9e\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.617457 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d63ecb83-f85e-48fd-b8ab-0f7720422936-operator-scripts\") pod \"d63ecb83-f85e-48fd-b8ab-0f7720422936\" (UID: \"d63ecb83-f85e-48fd-b8ab-0f7720422936\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.617514 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmnc8\" (UniqueName: \"kubernetes.io/projected/d63ecb83-f85e-48fd-b8ab-0f7720422936-kube-api-access-dmnc8\") pod \"d63ecb83-f85e-48fd-b8ab-0f7720422936\" (UID: \"d63ecb83-f85e-48fd-b8ab-0f7720422936\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.617578 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltgnw\" (UniqueName: \"kubernetes.io/projected/5dd80c51-5410-48ee-98da-4c6509b59e04-kube-api-access-ltgnw\") pod \"5dd80c51-5410-48ee-98da-4c6509b59e04\" (UID: \"5dd80c51-5410-48ee-98da-4c6509b59e04\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.619742 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dd80c51-5410-48ee-98da-4c6509b59e04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5dd80c51-5410-48ee-98da-4c6509b59e04" (UID: "5dd80c51-5410-48ee-98da-4c6509b59e04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.619910 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88aa4931-d135-4771-90fb-302c92874f9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88aa4931-d135-4771-90fb-302c92874f9e" (UID: "88aa4931-d135-4771-90fb-302c92874f9e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.620123 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d63ecb83-f85e-48fd-b8ab-0f7720422936-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d63ecb83-f85e-48fd-b8ab-0f7720422936" (UID: "d63ecb83-f85e-48fd-b8ab-0f7720422936"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.621346 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d63ecb83-f85e-48fd-b8ab-0f7720422936-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.621376 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5dd80c51-5410-48ee-98da-4c6509b59e04-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.621387 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88aa4931-d135-4771-90fb-302c92874f9e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.622689 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88aa4931-d135-4771-90fb-302c92874f9e-kube-api-access-dx62r" (OuterVolumeSpecName: "kube-api-access-dx62r") pod "88aa4931-d135-4771-90fb-302c92874f9e" (UID: "88aa4931-d135-4771-90fb-302c92874f9e"). InnerVolumeSpecName "kube-api-access-dx62r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.628712 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63ecb83-f85e-48fd-b8ab-0f7720422936-kube-api-access-dmnc8" (OuterVolumeSpecName: "kube-api-access-dmnc8") pod "d63ecb83-f85e-48fd-b8ab-0f7720422936" (UID: "d63ecb83-f85e-48fd-b8ab-0f7720422936"). InnerVolumeSpecName "kube-api-access-dmnc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.635263 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd80c51-5410-48ee-98da-4c6509b59e04-kube-api-access-ltgnw" (OuterVolumeSpecName: "kube-api-access-ltgnw") pod "5dd80c51-5410-48ee-98da-4c6509b59e04" (UID: "5dd80c51-5410-48ee-98da-4c6509b59e04"). InnerVolumeSpecName "kube-api-access-ltgnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.723738 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf5dg\" (UniqueName: \"kubernetes.io/projected/208b3820-2fbb-4e5b-bfa1-170b30f28af6-kube-api-access-qf5dg\") pod \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\" (UID: \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.723859 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-operator-scripts\") pod \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\" (UID: \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.723917 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwmc5\" (UniqueName: \"kubernetes.io/projected/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-kube-api-access-rwmc5\") pod \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\" (UID: \"2a90861d-abe6-4af4-b8ae-ac44f7d1b748\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.723959 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-operator-scripts\") pod \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\" (UID: \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.723981 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/208b3820-2fbb-4e5b-bfa1-170b30f28af6-operator-scripts\") pod \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\" (UID: \"208b3820-2fbb-4e5b-bfa1-170b30f28af6\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.724013 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n58hb\" (UniqueName: \"kubernetes.io/projected/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-kube-api-access-n58hb\") pod \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\" (UID: \"5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7\") " Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.724351 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmnc8\" (UniqueName: \"kubernetes.io/projected/d63ecb83-f85e-48fd-b8ab-0f7720422936-kube-api-access-dmnc8\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.724366 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltgnw\" (UniqueName: \"kubernetes.io/projected/5dd80c51-5410-48ee-98da-4c6509b59e04-kube-api-access-ltgnw\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.724375 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx62r\" (UniqueName: \"kubernetes.io/projected/88aa4931-d135-4771-90fb-302c92874f9e-kube-api-access-dx62r\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.725649 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a90861d-abe6-4af4-b8ae-ac44f7d1b748" (UID: "2a90861d-abe6-4af4-b8ae-ac44f7d1b748"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.726038 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7" (UID: "5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.726352 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/208b3820-2fbb-4e5b-bfa1-170b30f28af6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "208b3820-2fbb-4e5b-bfa1-170b30f28af6" (UID: "208b3820-2fbb-4e5b-bfa1-170b30f28af6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.727748 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-kube-api-access-n58hb" (OuterVolumeSpecName: "kube-api-access-n58hb") pod "5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7" (UID: "5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7"). InnerVolumeSpecName "kube-api-access-n58hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.728142 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/208b3820-2fbb-4e5b-bfa1-170b30f28af6-kube-api-access-qf5dg" (OuterVolumeSpecName: "kube-api-access-qf5dg") pod "208b3820-2fbb-4e5b-bfa1-170b30f28af6" (UID: "208b3820-2fbb-4e5b-bfa1-170b30f28af6"). InnerVolumeSpecName "kube-api-access-qf5dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.730028 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-kube-api-access-rwmc5" (OuterVolumeSpecName: "kube-api-access-rwmc5") pod "2a90861d-abe6-4af4-b8ae-ac44f7d1b748" (UID: "2a90861d-abe6-4af4-b8ae-ac44f7d1b748"). InnerVolumeSpecName "kube-api-access-rwmc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.826085 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.826385 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwmc5\" (UniqueName: \"kubernetes.io/projected/2a90861d-abe6-4af4-b8ae-ac44f7d1b748-kube-api-access-rwmc5\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.826397 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.826408 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/208b3820-2fbb-4e5b-bfa1-170b30f28af6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.826417 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n58hb\" (UniqueName: \"kubernetes.io/projected/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7-kube-api-access-n58hb\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:58 crc kubenswrapper[4988]: I1123 07:05:58.826425 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf5dg\" (UniqueName: \"kubernetes.io/projected/208b3820-2fbb-4e5b-bfa1-170b30f28af6-kube-api-access-qf5dg\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:59 crc kubenswrapper[4988]: I1123 07:05:59.605927 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-crz5q" Nov 23 07:05:59 crc kubenswrapper[4988]: I1123 07:05:59.606101 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" event={"ID":"0c14320c-7643-4a13-a602-480b33302bea","Type":"ContainerStarted","Data":"6f1213cbef84e94a2f9d92cc1a077ad4277997f1c6fdffd56afccf27f21dc773"} Nov 23 07:05:59 crc kubenswrapper[4988]: I1123 07:05:59.637896 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" podStartSLOduration=9.637882278 podStartE2EDuration="9.637882278s" podCreationTimestamp="2025-11-23 07:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:05:59.627759243 +0000 UTC m=+1211.936272006" watchObservedRunningTime="2025-11-23 07:05:59.637882278 +0000 UTC m=+1211.946395041" Nov 23 07:06:00 crc kubenswrapper[4988]: I1123 07:06:00.612945 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:06:01 crc kubenswrapper[4988]: I1123 07:06:01.629033 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rs4l6" event={"ID":"402d3c21-bc17-4659-8ed8-cc7bfece6d0a","Type":"ContainerStarted","Data":"cca8075ca5605eaf238b8df0600c890e2ed6f61bb9da40f8c766eba7e805c422"} Nov 23 07:06:01 crc kubenswrapper[4988]: I1123 07:06:01.655554 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rs4l6" podStartSLOduration=2.758556434 podStartE2EDuration="11.655535166s" podCreationTimestamp="2025-11-23 07:05:50 +0000 UTC" firstStartedPulling="2025-11-23 07:05:51.368916003 +0000 UTC m=+1203.677428766" lastFinishedPulling="2025-11-23 07:06:00.265894725 +0000 UTC m=+1212.574407498" observedRunningTime="2025-11-23 07:06:01.650435212 +0000 UTC m=+1213.958947995" watchObservedRunningTime="2025-11-23 07:06:01.655535166 +0000 UTC m=+1213.964047939" Nov 23 07:06:06 crc kubenswrapper[4988]: I1123 07:06:06.184696 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:06:06 crc kubenswrapper[4988]: I1123 07:06:06.277537 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-qmc8p"] Nov 23 07:06:06 crc kubenswrapper[4988]: I1123 07:06:06.277808 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" podUID="f0b6f1aa-60d9-4998-b382-82840e0159f2" containerName="dnsmasq-dns" containerID="cri-o://d1a082c7b37805cdb7bdeeab367da02afd2f77a70f5b0a648cfea94f8b3d34ef" gracePeriod=10 Nov 23 07:06:06 crc kubenswrapper[4988]: I1123 07:06:06.682053 4988 generic.go:334] "Generic (PLEG): container finished" podID="f0b6f1aa-60d9-4998-b382-82840e0159f2" containerID="d1a082c7b37805cdb7bdeeab367da02afd2f77a70f5b0a648cfea94f8b3d34ef" exitCode=0 Nov 23 07:06:06 crc kubenswrapper[4988]: I1123 07:06:06.682231 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" event={"ID":"f0b6f1aa-60d9-4998-b382-82840e0159f2","Type":"ContainerDied","Data":"d1a082c7b37805cdb7bdeeab367da02afd2f77a70f5b0a648cfea94f8b3d34ef"} Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.000680 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.176659 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-config\") pod \"f0b6f1aa-60d9-4998-b382-82840e0159f2\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.177099 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-nb\") pod \"f0b6f1aa-60d9-4998-b382-82840e0159f2\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.177212 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-sb\") pod \"f0b6f1aa-60d9-4998-b382-82840e0159f2\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.177269 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-dns-svc\") pod \"f0b6f1aa-60d9-4998-b382-82840e0159f2\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.177298 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbzvq\" (UniqueName: \"kubernetes.io/projected/f0b6f1aa-60d9-4998-b382-82840e0159f2-kube-api-access-gbzvq\") pod \"f0b6f1aa-60d9-4998-b382-82840e0159f2\" (UID: \"f0b6f1aa-60d9-4998-b382-82840e0159f2\") " Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.185454 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b6f1aa-60d9-4998-b382-82840e0159f2-kube-api-access-gbzvq" (OuterVolumeSpecName: "kube-api-access-gbzvq") pod "f0b6f1aa-60d9-4998-b382-82840e0159f2" (UID: "f0b6f1aa-60d9-4998-b382-82840e0159f2"). InnerVolumeSpecName "kube-api-access-gbzvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.223382 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0b6f1aa-60d9-4998-b382-82840e0159f2" (UID: "f0b6f1aa-60d9-4998-b382-82840e0159f2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.228794 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-config" (OuterVolumeSpecName: "config") pod "f0b6f1aa-60d9-4998-b382-82840e0159f2" (UID: "f0b6f1aa-60d9-4998-b382-82840e0159f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.232462 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0b6f1aa-60d9-4998-b382-82840e0159f2" (UID: "f0b6f1aa-60d9-4998-b382-82840e0159f2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.249523 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0b6f1aa-60d9-4998-b382-82840e0159f2" (UID: "f0b6f1aa-60d9-4998-b382-82840e0159f2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.279286 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.279316 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.279326 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbzvq\" (UniqueName: \"kubernetes.io/projected/f0b6f1aa-60d9-4998-b382-82840e0159f2-kube-api-access-gbzvq\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.279338 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.279347 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0b6f1aa-60d9-4998-b382-82840e0159f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.697557 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" event={"ID":"f0b6f1aa-60d9-4998-b382-82840e0159f2","Type":"ContainerDied","Data":"5ccee52bdf5f61b6eb72112452e7e18a7e0f7bfb115a8bf5b36787cd2d48577d"} Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.697668 4988 scope.go:117] "RemoveContainer" containerID="d1a082c7b37805cdb7bdeeab367da02afd2f77a70f5b0a648cfea94f8b3d34ef" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.697673 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-qmc8p" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.722587 4988 scope.go:117] "RemoveContainer" containerID="414185fd06129889e13c908eb78f1f2f2a9e102d8c0e598242683ad092ddda7a" Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.749463 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-qmc8p"] Nov 23 07:06:07 crc kubenswrapper[4988]: I1123 07:06:07.756095 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-qmc8p"] Nov 23 07:06:08 crc kubenswrapper[4988]: I1123 07:06:08.517490 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b6f1aa-60d9-4998-b382-82840e0159f2" path="/var/lib/kubelet/pods/f0b6f1aa-60d9-4998-b382-82840e0159f2/volumes" Nov 23 07:06:09 crc kubenswrapper[4988]: I1123 07:06:09.726955 4988 generic.go:334] "Generic (PLEG): container finished" podID="402d3c21-bc17-4659-8ed8-cc7bfece6d0a" containerID="cca8075ca5605eaf238b8df0600c890e2ed6f61bb9da40f8c766eba7e805c422" exitCode=0 Nov 23 07:06:09 crc kubenswrapper[4988]: I1123 07:06:09.727105 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rs4l6" event={"ID":"402d3c21-bc17-4659-8ed8-cc7bfece6d0a","Type":"ContainerDied","Data":"cca8075ca5605eaf238b8df0600c890e2ed6f61bb9da40f8c766eba7e805c422"} Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.100824 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.243956 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfjlp\" (UniqueName: \"kubernetes.io/projected/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-kube-api-access-vfjlp\") pod \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.244051 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-combined-ca-bundle\") pod \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.244225 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-config-data\") pod \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\" (UID: \"402d3c21-bc17-4659-8ed8-cc7bfece6d0a\") " Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.266356 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-kube-api-access-vfjlp" (OuterVolumeSpecName: "kube-api-access-vfjlp") pod "402d3c21-bc17-4659-8ed8-cc7bfece6d0a" (UID: "402d3c21-bc17-4659-8ed8-cc7bfece6d0a"). InnerVolumeSpecName "kube-api-access-vfjlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.274579 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "402d3c21-bc17-4659-8ed8-cc7bfece6d0a" (UID: "402d3c21-bc17-4659-8ed8-cc7bfece6d0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.298826 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-config-data" (OuterVolumeSpecName: "config-data") pod "402d3c21-bc17-4659-8ed8-cc7bfece6d0a" (UID: "402d3c21-bc17-4659-8ed8-cc7bfece6d0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.345863 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.345894 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfjlp\" (UniqueName: \"kubernetes.io/projected/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-kube-api-access-vfjlp\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.345905 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402d3c21-bc17-4659-8ed8-cc7bfece6d0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.755755 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rs4l6" event={"ID":"402d3c21-bc17-4659-8ed8-cc7bfece6d0a","Type":"ContainerDied","Data":"2f9548471e1ea41d9cd763b2a0f3c4268da9137e7fe6b0bdbc5092c9860cd86c"} Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.755802 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f9548471e1ea41d9cd763b2a0f3c4268da9137e7fe6b0bdbc5092c9860cd86c" Nov 23 07:06:11 crc kubenswrapper[4988]: I1123 07:06:11.755872 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rs4l6" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.034031 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-xwlxl"] Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.034994 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="208b3820-2fbb-4e5b-bfa1-170b30f28af6" containerName="mariadb-database-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035025 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="208b3820-2fbb-4e5b-bfa1-170b30f28af6" containerName="mariadb-database-create" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035052 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" containerName="init" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035062 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" containerName="init" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035081 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b6f1aa-60d9-4998-b382-82840e0159f2" containerName="init" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035090 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b6f1aa-60d9-4998-b382-82840e0159f2" containerName="init" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035101 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7" containerName="mariadb-account-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035108 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7" containerName="mariadb-account-create" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035120 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402d3c21-bc17-4659-8ed8-cc7bfece6d0a" containerName="keystone-db-sync" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035128 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="402d3c21-bc17-4659-8ed8-cc7bfece6d0a" containerName="keystone-db-sync" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035143 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88aa4931-d135-4771-90fb-302c92874f9e" containerName="mariadb-account-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035150 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="88aa4931-d135-4771-90fb-302c92874f9e" containerName="mariadb-account-create" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035162 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63ecb83-f85e-48fd-b8ab-0f7720422936" containerName="mariadb-database-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035171 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63ecb83-f85e-48fd-b8ab-0f7720422936" containerName="mariadb-database-create" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035204 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd80c51-5410-48ee-98da-4c6509b59e04" containerName="mariadb-account-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035211 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd80c51-5410-48ee-98da-4c6509b59e04" containerName="mariadb-account-create" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035228 4988 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" containerName="dnsmasq-dns" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035235 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" containerName="dnsmasq-dns" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035250 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a90861d-abe6-4af4-b8ae-ac44f7d1b748" containerName="mariadb-database-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035275 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a90861d-abe6-4af4-b8ae-ac44f7d1b748" containerName="mariadb-database-create" Nov 23 07:06:12 crc kubenswrapper[4988]: E1123 07:06:12.035291 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b6f1aa-60d9-4998-b382-82840e0159f2" containerName="dnsmasq-dns" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035299 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b6f1aa-60d9-4998-b382-82840e0159f2" containerName="dnsmasq-dns" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035496 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd80c51-5410-48ee-98da-4c6509b59e04" containerName="mariadb-account-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035506 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a90861d-abe6-4af4-b8ae-ac44f7d1b748" containerName="mariadb-database-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035517 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="88aa4931-d135-4771-90fb-302c92874f9e" containerName="mariadb-account-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035525 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7" containerName="mariadb-account-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035534 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="208b3820-2fbb-4e5b-bfa1-170b30f28af6" containerName="mariadb-database-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035545 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b6f1aa-60d9-4998-b382-82840e0159f2" containerName="dnsmasq-dns" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035550 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc2b5c45-5065-49b8-ae8a-4b3429fb47aa" containerName="dnsmasq-dns" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035568 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="402d3c21-bc17-4659-8ed8-cc7bfece6d0a" containerName="keystone-db-sync" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.035577 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63ecb83-f85e-48fd-b8ab-0f7720422936" containerName="mariadb-database-create" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.036840 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.054273 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dwqg2"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.056028 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.061667 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-xwlxl"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.065504 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.065548 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.065689 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.065823 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.065953 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zkrc" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.111348 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dwqg2"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.159987 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc22t\" (UniqueName: \"kubernetes.io/projected/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-kube-api-access-zc22t\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160039 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-credential-keys\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160069 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-sb\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160088 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-fernet-keys\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160114 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-scripts\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160129 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-config\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " 
pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160157 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwth\" (UniqueName: \"kubernetes.io/projected/047b20b3-21b2-40d4-8995-c285ac850fbf-kube-api-access-2hwth\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160218 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-swift-storage-0\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160269 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-combined-ca-bundle\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160290 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-config-data\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160312 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-svc\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.160340 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.236729 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4mmc7"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.238333 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.241027 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.241369 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.244627 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p7zwl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.250974 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mzqm8"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.252785 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.262124 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9wlpm" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.262267 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4mmc7"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.262393 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263172 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-scripts\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263267 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-config\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263309 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hwth\" (UniqueName: \"kubernetes.io/projected/047b20b3-21b2-40d4-8995-c285ac850fbf-kube-api-access-2hwth\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263368 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-swift-storage-0\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263469 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-combined-ca-bundle\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263510 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-config-data\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263545 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-svc\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263584 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263627 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc22t\" (UniqueName: \"kubernetes.io/projected/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-kube-api-access-zc22t\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263660 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-credential-keys\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263703 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-sb\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.263726 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-fernet-keys\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.266737 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-svc\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.267751 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-nb\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.272790 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-scripts\") pod \"keystone-bootstrap-dwqg2\" (UID: 
\"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.272848 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-credential-keys\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.273522 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-swift-storage-0\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.274638 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-sb\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.275542 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-combined-ca-bundle\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.275761 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-config\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.280619 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-config-data\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.295730 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.300294 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-fernet-keys\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.303951 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mzqm8"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.319628 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc22t\" (UniqueName: \"kubernetes.io/projected/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-kube-api-access-zc22t\") pod \"keystone-bootstrap-dwqg2\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.367827 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-g7x4z\" (UniqueName: \"kubernetes.io/projected/95b8327c-fa6e-40b7-984e-c819d78da49b-kube-api-access-g7x4z\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.367888 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-combined-ca-bundle\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.367928 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrbpt\" (UniqueName: \"kubernetes.io/projected/8492667f-e261-4214-8c00-d2271167976e-kube-api-access-hrbpt\") pod \"neutron-db-sync-mzqm8\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.367962 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-config-data\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.368013 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-config\") pod \"neutron-db-sync-mzqm8\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.368049 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-scripts\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.368085 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-combined-ca-bundle\") pod \"neutron-db-sync-mzqm8\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.368122 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-db-sync-config-data\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.368147 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95b8327c-fa6e-40b7-984e-c819d78da49b-etc-machine-id\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.386009 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwth\" 
(UniqueName: \"kubernetes.io/projected/047b20b3-21b2-40d4-8995-c285ac850fbf-kube-api-access-2hwth\") pod \"dnsmasq-dns-7dbf8bff67-xwlxl\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.392274 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.441271 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.444087 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.454940 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.474545 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrbpt\" (UniqueName: \"kubernetes.io/projected/8492667f-e261-4214-8c00-d2271167976e-kube-api-access-hrbpt\") pod \"neutron-db-sync-mzqm8\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.474632 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-config-data\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.474693 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-config\") pod \"neutron-db-sync-mzqm8\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.474737 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-scripts\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.474787 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-combined-ca-bundle\") pod \"neutron-db-sync-mzqm8\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.474838 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-db-sync-config-data\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.474891 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95b8327c-fa6e-40b7-984e-c819d78da49b-etc-machine-id\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.474939 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7x4z\" (UniqueName: \"kubernetes.io/projected/95b8327c-fa6e-40b7-984e-c819d78da49b-kube-api-access-g7x4z\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.474981 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-combined-ca-bundle\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.476590 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.477693 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95b8327c-fa6e-40b7-984e-c819d78da49b-etc-machine-id\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.504164 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-scripts\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.507267 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-combined-ca-bundle\") pod \"neutron-db-sync-mzqm8\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.507401 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-combined-ca-bundle\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.522682 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-config-data\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.528296 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-db-sync-config-data\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.539594 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7x4z\" (UniqueName: \"kubernetes.io/projected/95b8327c-fa6e-40b7-984e-c819d78da49b-kube-api-access-g7x4z\") pod \"cinder-db-sync-4mmc7\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.539700 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-hrbpt\" (UniqueName: \"kubernetes.io/projected/8492667f-e261-4214-8c00-d2271167976e-kube-api-access-hrbpt\") pod \"neutron-db-sync-mzqm8\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.542310 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-config\") pod \"neutron-db-sync-mzqm8\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.556969 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j9qhs"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.557813 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.557830 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j9qhs"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.557892 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.562054 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-26h6n"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.583347 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.583721 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.585440 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-run-httpd\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.585565 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.585733 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-log-httpd\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.585882 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.585776 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7stg\" (UniqueName: \"kubernetes.io/projected/f0cb22d8-be3d-440b-9316-3406be73f68b-kube-api-access-v7stg\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.586305 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-scripts\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.586562 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-config-data\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.583965 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.588096 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b492k" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.588644 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v999q" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.591827 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.613993 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-xwlxl"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.615452 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.639093 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-26h6n"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.678058 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-5mmkw"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.688637 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694000 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694066 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-scripts\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694086 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-config-data\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694100 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm2h4\" (UniqueName: \"kubernetes.io/projected/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-kube-api-access-dm2h4\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694118 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-log-httpd\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694136 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7stg\" (UniqueName: \"kubernetes.io/projected/f0cb22d8-be3d-440b-9316-3406be73f68b-kube-api-access-v7stg\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694153 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-scripts\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694188 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-combined-ca-bundle\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694233 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-combined-ca-bundle\") pod \"barbican-db-sync-26h6n\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694261 4988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-logs\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694312 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-config-data\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694345 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f555\" (UniqueName: \"kubernetes.io/projected/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-kube-api-access-6f555\") pod \"barbican-db-sync-26h6n\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694393 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-db-sync-config-data\") pod \"barbican-db-sync-26h6n\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694410 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694427 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-run-httpd\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.694866 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-run-httpd\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.696756 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-log-httpd\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.697698 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.706090 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.707732 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-scripts\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.711940 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.714100 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-5mmkw"] Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.721414 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7stg\" (UniqueName: \"kubernetes.io/projected/f0cb22d8-be3d-440b-9316-3406be73f68b-kube-api-access-v7stg\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.726683 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-config-data\") pod \"ceilometer-0\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.734178 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.795721 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-nb\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.795779 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f555\" (UniqueName: \"kubernetes.io/projected/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-kube-api-access-6f555\") pod \"barbican-db-sync-26h6n\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.795821 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-svc\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.795853 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-sb\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.795873 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-db-sync-config-data\") pod \"barbican-db-sync-26h6n\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.795934 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-swift-storage-0\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.795955 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-config\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.795970 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-scripts\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.795986 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm2h4\" (UniqueName: \"kubernetes.io/projected/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-kube-api-access-dm2h4\") pod \"placement-db-sync-j9qhs\" (UID: 
\"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.796005 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-config-data\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.796021 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv5br\" (UniqueName: \"kubernetes.io/projected/8290145a-df4b-4381-81d8-d2ce14d105fd-kube-api-access-lv5br\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.796064 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-combined-ca-bundle\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.796085 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-combined-ca-bundle\") pod \"barbican-db-sync-26h6n\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.796103 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-logs\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.801308 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-logs\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.806602 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-combined-ca-bundle\") pod \"barbican-db-sync-26h6n\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.806927 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-db-sync-config-data\") pod \"barbican-db-sync-26h6n\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.807911 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-scripts\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.808934 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-config-data\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.810952 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-combined-ca-bundle\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.828630 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm2h4\" (UniqueName: \"kubernetes.io/projected/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-kube-api-access-dm2h4\") pod \"placement-db-sync-j9qhs\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.828630 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f555\" (UniqueName: \"kubernetes.io/projected/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-kube-api-access-6f555\") pod \"barbican-db-sync-26h6n\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.897871 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-svc\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.897930 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-sb\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.897993 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-swift-storage-0\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.898016 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-config\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.898034 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv5br\" (UniqueName: \"kubernetes.io/projected/8290145a-df4b-4381-81d8-d2ce14d105fd-kube-api-access-lv5br\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.898096 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-nb\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.899080 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-svc\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.899589 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-sb\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.900863 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-config\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.901667 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-swift-storage-0\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.902047 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-nb\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.921534 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv5br\" (UniqueName: \"kubernetes.io/projected/8290145a-df4b-4381-81d8-d2ce14d105fd-kube-api-access-lv5br\") pod \"dnsmasq-dns-76c58b6d97-5mmkw\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.961800 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:06:12 crc kubenswrapper[4988]: I1123 07:06:12.989798 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.029080 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-26h6n" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.046968 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.074787 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dwqg2"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.184131 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.190756 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.194043 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.194344 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.195980 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.196003 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-p2qgk" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.208684 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.277151 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-xwlxl"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.307996 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.308041 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9w7z\" (UniqueName: \"kubernetes.io/projected/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-kube-api-access-q9w7z\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.308066 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.308085 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.308103 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.308179 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.308223 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-logs\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.308241 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.311415 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.320312 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.332274 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.332438 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.350454 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4mmc7"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.371366 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409111 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409154 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409177 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9w7z\" (UniqueName: \"kubernetes.io/projected/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-kube-api-access-q9w7z\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409215 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409235 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409253 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409274 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409292 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8g9s\" (UniqueName: \"kubernetes.io/projected/85c01db1-b2c4-420a-b1d4-06eef61b0803-kube-api-access-m8g9s\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409339 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409377 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409409 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409431 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-logs\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409446 
4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409465 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-logs\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409483 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409503 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.409843 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.414226 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.418531 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-logs\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.431421 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.449711 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.486907 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.487268 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.504317 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9w7z\" (UniqueName: \"kubernetes.io/projected/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-kube-api-access-q9w7z\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.512127 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.512170 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8g9s\" (UniqueName: \"kubernetes.io/projected/85c01db1-b2c4-420a-b1d4-06eef61b0803-kube-api-access-m8g9s\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.512222 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.512265 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.512305 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-logs\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.512321 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.512347 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-internal-tls-certs\") 
pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.512378 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.517490 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.519464 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-logs\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.520257 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.523815 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mzqm8"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.525670 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-config-data\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.526319 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.551011 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.551022 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-scripts\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.551814 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.565444 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.579126 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8g9s\" (UniqueName: \"kubernetes.io/projected/85c01db1-b2c4-420a-b1d4-06eef61b0803-kube-api-access-m8g9s\") pod \"glance-default-internal-api-0\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.708072 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.742178 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.830749 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dwqg2" event={"ID":"8a129187-0ddc-4d27-bd0f-5e5698dc4c63","Type":"ContainerStarted","Data":"5b405e8900a8c59b0ef9aab24e2657934adcae34a73d107c3812c2590dbd625d"} Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.839714 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" event={"ID":"047b20b3-21b2-40d4-8995-c285ac850fbf","Type":"ContainerStarted","Data":"0df50a6a4a79451eac93ecee9e98e6d5070cdbb3e01935d97d2f334ccbe68d47"} Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.839917 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j9qhs"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.867182 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mzqm8" event={"ID":"8492667f-e261-4214-8c00-d2271167976e","Type":"ContainerStarted","Data":"98851ec32f8e01ba9e0357b0cbdee27f29bde1b96a8876c4a5e06a6cfbfbcf89"} Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.871700 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:06:13 crc kubenswrapper[4988]: I1123 07:06:13.878379 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mmc7" event={"ID":"95b8327c-fa6e-40b7-984e-c819d78da49b","Type":"ContainerStarted","Data":"6f3c79b0b29bb011d77ece8cdf88d600ed04ba1a9d9b8029f9e1363ef1d66f6b"} Nov 23 07:06:13 crc kubenswrapper[4988]: W1123 07:06:13.901902 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd5e9b0b_3032_4132_9dbe_d12bd89466f0.slice/crio-67112064fb888bd23ab7f534b6ae77f2678518e7c37e620a274fd843d9f04bd1 WatchSource:0}: Error finding container 67112064fb888bd23ab7f534b6ae77f2678518e7c37e620a274fd843d9f04bd1: Status 404 returned error can't find the container with id 67112064fb888bd23ab7f534b6ae77f2678518e7c37e620a274fd843d9f04bd1 Nov 23 07:06:13 crc kubenswrapper[4988]: W1123 07:06:13.921781 4988 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0cb22d8_be3d_440b_9316_3406be73f68b.slice/crio-760e76a77636fc038d2d34d45e799e954b047ced4e91e57c263e3380f09ce635 WatchSource:0}: Error finding container 760e76a77636fc038d2d34d45e799e954b047ced4e91e57c263e3380f09ce635: Status 404 returned error can't find the container with id 760e76a77636fc038d2d34d45e799e954b047ced4e91e57c263e3380f09ce635 Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.060567 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-5mmkw"] Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.078361 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-26h6n"] Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.224845 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.254099 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.355786 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.465991 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.558086 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.891734 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-26h6n" event={"ID":"5f2fc5f1-4980-49de-8ff7-981bb9f4966c","Type":"ContainerStarted","Data":"dea3886697a3e8e2f2be2af2c37d3343f1d7457a42afca8f50b96ca42e3c42d3"} Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.895548 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dwqg2" event={"ID":"8a129187-0ddc-4d27-bd0f-5e5698dc4c63","Type":"ContainerStarted","Data":"e747451ac1bf2e06bbd8441d5880d4818f7807bd9c7f5bc06bd3838c179ebeab"} Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.902803 4988 generic.go:334] "Generic (PLEG): container finished" podID="047b20b3-21b2-40d4-8995-c285ac850fbf" containerID="9012958ba40b7278672855a15c3d7381161e2bf76c601243ab9ce2c25bd0cfdd" exitCode=0 Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.902870 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" event={"ID":"047b20b3-21b2-40d4-8995-c285ac850fbf","Type":"ContainerDied","Data":"9012958ba40b7278672855a15c3d7381161e2bf76c601243ab9ce2c25bd0cfdd"} Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.908470 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mzqm8" event={"ID":"8492667f-e261-4214-8c00-d2271167976e","Type":"ContainerStarted","Data":"04443b6b592714f8daae5298a37741922db73aa1d1ed82ef92859ab5f2626a5d"} Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.922575 4988 generic.go:334] "Generic (PLEG): container finished" podID="8290145a-df4b-4381-81d8-d2ce14d105fd" containerID="d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f" exitCode=0 Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.922700 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" 
event={"ID":"8290145a-df4b-4381-81d8-d2ce14d105fd","Type":"ContainerDied","Data":"d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f"} Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.922733 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" event={"ID":"8290145a-df4b-4381-81d8-d2ce14d105fd","Type":"ContainerStarted","Data":"e2e7ddfa884db486417f376bfeeaf02c7ccc2ebf2596e86338486a64d6048707"} Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.925853 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dwqg2" podStartSLOduration=2.925836133 podStartE2EDuration="2.925836133s" podCreationTimestamp="2025-11-23 07:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:14.922633855 +0000 UTC m=+1227.231146628" watchObservedRunningTime="2025-11-23 07:06:14.925836133 +0000 UTC m=+1227.234348896" Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.935077 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9qhs" event={"ID":"bd5e9b0b-3032-4132-9dbe-d12bd89466f0","Type":"ContainerStarted","Data":"67112064fb888bd23ab7f534b6ae77f2678518e7c37e620a274fd843d9f04bd1"} Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.937542 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6","Type":"ContainerStarted","Data":"dbd13e7906839b8feeca520754ee0e55872f7e8db8cb4b6c22ad9837669de392"} Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.961431 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mzqm8" podStartSLOduration=2.961406644 podStartE2EDuration="2.961406644s" podCreationTimestamp="2025-11-23 07:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:14.957205833 +0000 UTC m=+1227.265718596" watchObservedRunningTime="2025-11-23 07:06:14.961406644 +0000 UTC m=+1227.269919407" Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.971758 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerStarted","Data":"760e76a77636fc038d2d34d45e799e954b047ced4e91e57c263e3380f09ce635"} Nov 23 07:06:14 crc kubenswrapper[4988]: I1123 07:06:14.975999 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85c01db1-b2c4-420a-b1d4-06eef61b0803","Type":"ContainerStarted","Data":"3a70355f71cfff8fcb4d82d82cafd4115e431720b8fbe122654237b7c118677e"} Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.285769 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.468392 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-nb\") pod \"047b20b3-21b2-40d4-8995-c285ac850fbf\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.468788 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-swift-storage-0\") pod \"047b20b3-21b2-40d4-8995-c285ac850fbf\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.468828 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hwth\" (UniqueName: \"kubernetes.io/projected/047b20b3-21b2-40d4-8995-c285ac850fbf-kube-api-access-2hwth\") pod \"047b20b3-21b2-40d4-8995-c285ac850fbf\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.468849 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-svc\") pod \"047b20b3-21b2-40d4-8995-c285ac850fbf\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.468994 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-sb\") pod \"047b20b3-21b2-40d4-8995-c285ac850fbf\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.469040 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-config\") pod \"047b20b3-21b2-40d4-8995-c285ac850fbf\" (UID: \"047b20b3-21b2-40d4-8995-c285ac850fbf\") " Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.492975 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "047b20b3-21b2-40d4-8995-c285ac850fbf" (UID: "047b20b3-21b2-40d4-8995-c285ac850fbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.501798 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "047b20b3-21b2-40d4-8995-c285ac850fbf" (UID: "047b20b3-21b2-40d4-8995-c285ac850fbf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.502583 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/047b20b3-21b2-40d4-8995-c285ac850fbf-kube-api-access-2hwth" (OuterVolumeSpecName: "kube-api-access-2hwth") pod "047b20b3-21b2-40d4-8995-c285ac850fbf" (UID: "047b20b3-21b2-40d4-8995-c285ac850fbf"). InnerVolumeSpecName "kube-api-access-2hwth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.509652 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "047b20b3-21b2-40d4-8995-c285ac850fbf" (UID: "047b20b3-21b2-40d4-8995-c285ac850fbf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.520312 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "047b20b3-21b2-40d4-8995-c285ac850fbf" (UID: "047b20b3-21b2-40d4-8995-c285ac850fbf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.521439 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-config" (OuterVolumeSpecName: "config") pod "047b20b3-21b2-40d4-8995-c285ac850fbf" (UID: "047b20b3-21b2-40d4-8995-c285ac850fbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.572917 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.572950 4988 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.572962 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hwth\" (UniqueName: \"kubernetes.io/projected/047b20b3-21b2-40d4-8995-c285ac850fbf-kube-api-access-2hwth\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.572972 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.572980 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.572989 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/047b20b3-21b2-40d4-8995-c285ac850fbf-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.987509 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" event={"ID":"047b20b3-21b2-40d4-8995-c285ac850fbf","Type":"ContainerDied","Data":"0df50a6a4a79451eac93ecee9e98e6d5070cdbb3e01935d97d2f334ccbe68d47"} Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.987564 4988 scope.go:117] "RemoveContainer" containerID="9012958ba40b7278672855a15c3d7381161e2bf76c601243ab9ce2c25bd0cfdd" Nov 23 07:06:15 crc kubenswrapper[4988]: I1123 07:06:15.987584 4988 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-xwlxl" Nov 23 07:06:16 crc kubenswrapper[4988]: I1123 07:06:16.077705 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-xwlxl"] Nov 23 07:06:16 crc kubenswrapper[4988]: I1123 07:06:16.083957 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-xwlxl"] Nov 23 07:06:16 crc kubenswrapper[4988]: I1123 07:06:16.507044 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="047b20b3-21b2-40d4-8995-c285ac850fbf" path="/var/lib/kubelet/pods/047b20b3-21b2-40d4-8995-c285ac850fbf/volumes" Nov 23 07:06:16 crc kubenswrapper[4988]: I1123 07:06:16.996969 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85c01db1-b2c4-420a-b1d4-06eef61b0803","Type":"ContainerStarted","Data":"057b98944194b5d897f4ca8d8a97c5354739ca930fb94e035ae2e26b7dbf690a"} Nov 23 07:06:17 crc kubenswrapper[4988]: I1123 07:06:17.001065 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6","Type":"ContainerStarted","Data":"d05bae519c2db642a8b2a4f35414eb0733d2f840329726d78ba9c70ef134259e"} Nov 23 07:06:17 crc kubenswrapper[4988]: I1123 07:06:17.003887 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" event={"ID":"8290145a-df4b-4381-81d8-d2ce14d105fd","Type":"ContainerStarted","Data":"ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61"} Nov 23 07:06:17 crc kubenswrapper[4988]: I1123 07:06:17.004371 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:17 crc kubenswrapper[4988]: I1123 07:06:17.032668 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" podStartSLOduration=5.03265077 podStartE2EDuration="5.03265077s" podCreationTimestamp="2025-11-23 07:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:17.026006639 +0000 UTC m=+1229.334519402" watchObservedRunningTime="2025-11-23 07:06:17.03265077 +0000 UTC m=+1229.341163533" Nov 23 07:06:18 crc kubenswrapper[4988]: I1123 07:06:18.015603 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6","Type":"ContainerStarted","Data":"f525908dbbe9431756fc9c59212f8b7ab848fdc59fddaf693323648fb806f013"} Nov 23 07:06:19 crc kubenswrapper[4988]: I1123 07:06:19.028827 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85c01db1-b2c4-420a-b1d4-06eef61b0803","Type":"ContainerStarted","Data":"a87db98f40286a485b8ca65514aaf7d43dfb15da1b059a3a8e7dee0ffd0c4499"} Nov 23 07:06:19 crc kubenswrapper[4988]: I1123 07:06:19.028885 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerName="glance-log" containerID="cri-o://d05bae519c2db642a8b2a4f35414eb0733d2f840329726d78ba9c70ef134259e" gracePeriod=30 Nov 23 07:06:19 crc kubenswrapper[4988]: I1123 07:06:19.028934 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerName="glance-log" containerID="cri-o://057b98944194b5d897f4ca8d8a97c5354739ca930fb94e035ae2e26b7dbf690a" gracePeriod=30 Nov 23 07:06:19 crc kubenswrapper[4988]: I1123 07:06:19.029021 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerName="glance-httpd" containerID="cri-o://a87db98f40286a485b8ca65514aaf7d43dfb15da1b059a3a8e7dee0ffd0c4499" gracePeriod=30 Nov 23 07:06:19 crc kubenswrapper[4988]: I1123 07:06:19.029077 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerName="glance-httpd" containerID="cri-o://f525908dbbe9431756fc9c59212f8b7ab848fdc59fddaf693323648fb806f013" gracePeriod=30 Nov 23 07:06:19 crc kubenswrapper[4988]: I1123 07:06:19.054213 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.054187437 podStartE2EDuration="7.054187437s" podCreationTimestamp="2025-11-23 07:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:19.051603435 +0000 UTC m=+1231.360116198" watchObservedRunningTime="2025-11-23 07:06:19.054187437 +0000 UTC m=+1231.362700200" Nov 23 07:06:19 crc kubenswrapper[4988]: I1123 07:06:19.075879 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.075861883 podStartE2EDuration="7.075861883s" podCreationTimestamp="2025-11-23 07:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:19.073300422 +0000 UTC m=+1231.381813185" watchObservedRunningTime="2025-11-23 07:06:19.075861883 +0000 UTC m=+1231.384374646" Nov 23 07:06:20 crc kubenswrapper[4988]: I1123 07:06:20.040655 4988 generic.go:334] "Generic (PLEG): container finished" podID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerID="f525908dbbe9431756fc9c59212f8b7ab848fdc59fddaf693323648fb806f013" exitCode=0 Nov 23 07:06:20 crc kubenswrapper[4988]: I1123 07:06:20.040937 4988 generic.go:334] "Generic (PLEG): container finished" podID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerID="d05bae519c2db642a8b2a4f35414eb0733d2f840329726d78ba9c70ef134259e" exitCode=143 Nov 23 07:06:20 crc kubenswrapper[4988]: I1123 07:06:20.040993 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6","Type":"ContainerDied","Data":"f525908dbbe9431756fc9c59212f8b7ab848fdc59fddaf693323648fb806f013"} Nov 23 07:06:20 crc kubenswrapper[4988]: I1123 07:06:20.041023 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6","Type":"ContainerDied","Data":"d05bae519c2db642a8b2a4f35414eb0733d2f840329726d78ba9c70ef134259e"} Nov 23 07:06:20 crc kubenswrapper[4988]: I1123 07:06:20.044147 4988 generic.go:334] "Generic (PLEG): container finished" podID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerID="a87db98f40286a485b8ca65514aaf7d43dfb15da1b059a3a8e7dee0ffd0c4499" exitCode=0 Nov 23 07:06:20 crc kubenswrapper[4988]: I1123 07:06:20.044173 4988 generic.go:334] "Generic (PLEG): container 
finished" podID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerID="057b98944194b5d897f4ca8d8a97c5354739ca930fb94e035ae2e26b7dbf690a" exitCode=143 Nov 23 07:06:20 crc kubenswrapper[4988]: I1123 07:06:20.044207 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85c01db1-b2c4-420a-b1d4-06eef61b0803","Type":"ContainerDied","Data":"a87db98f40286a485b8ca65514aaf7d43dfb15da1b059a3a8e7dee0ffd0c4499"} Nov 23 07:06:20 crc kubenswrapper[4988]: I1123 07:06:20.044233 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85c01db1-b2c4-420a-b1d4-06eef61b0803","Type":"ContainerDied","Data":"057b98944194b5d897f4ca8d8a97c5354739ca930fb94e035ae2e26b7dbf690a"} Nov 23 07:06:21 crc kubenswrapper[4988]: I1123 07:06:21.058757 4988 generic.go:334] "Generic (PLEG): container finished" podID="8a129187-0ddc-4d27-bd0f-5e5698dc4c63" containerID="e747451ac1bf2e06bbd8441d5880d4818f7807bd9c7f5bc06bd3838c179ebeab" exitCode=0 Nov 23 07:06:21 crc kubenswrapper[4988]: I1123 07:06:21.058849 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dwqg2" event={"ID":"8a129187-0ddc-4d27-bd0f-5e5698dc4c63","Type":"ContainerDied","Data":"e747451ac1bf2e06bbd8441d5880d4818f7807bd9c7f5bc06bd3838c179ebeab"} Nov 23 07:06:23 crc kubenswrapper[4988]: I1123 07:06:23.049812 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:06:23 crc kubenswrapper[4988]: I1123 07:06:23.139583 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-2qd5t"] Nov 23 07:06:23 crc kubenswrapper[4988]: I1123 07:06:23.139834 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="dnsmasq-dns" containerID="cri-o://6f1213cbef84e94a2f9d92cc1a077ad4277997f1c6fdffd56afccf27f21dc773" gracePeriod=10 Nov 23 07:06:24 crc kubenswrapper[4988]: I1123 07:06:24.121253 4988 generic.go:334] "Generic (PLEG): container finished" podID="0c14320c-7643-4a13-a602-480b33302bea" containerID="6f1213cbef84e94a2f9d92cc1a077ad4277997f1c6fdffd56afccf27f21dc773" exitCode=0 Nov 23 07:06:24 crc kubenswrapper[4988]: I1123 07:06:24.121465 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" event={"ID":"0c14320c-7643-4a13-a602-480b33302bea","Type":"ContainerDied","Data":"6f1213cbef84e94a2f9d92cc1a077ad4277997f1c6fdffd56afccf27f21dc773"} Nov 23 07:06:26 crc kubenswrapper[4988]: I1123 07:06:26.183402 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Nov 23 07:06:31 crc kubenswrapper[4988]: I1123 07:06:31.182916 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Nov 23 07:06:32 crc kubenswrapper[4988]: E1123 07:06:32.872256 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099" Nov 23 07:06:32 crc kubenswrapper[4988]: E1123 07:06:32.872709 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dm2h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-j9qhs_openstack(bd5e9b0b-3032-4132-9dbe-d12bd89466f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:06:32 crc kubenswrapper[4988]: E1123 07:06:32.874541 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-j9qhs" podUID="bd5e9b0b-3032-4132-9dbe-d12bd89466f0" Nov 23 07:06:32 crc kubenswrapper[4988]: I1123 07:06:32.994809 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.008734 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.123937 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-fernet-keys\") pod \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.124367 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-combined-ca-bundle\") pod \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125279 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9w7z\" (UniqueName: \"kubernetes.io/projected/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-kube-api-access-q9w7z\") pod \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125348 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-config-data\") pod \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125414 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-httpd-run\") pod \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125461 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125490 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-credential-keys\") pod \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125539 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-logs\") pod \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125568 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-scripts\") pod \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125803 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-public-tls-certs\") pod \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\" (UID: \"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6\") " Nov 23 07:06:33 crc 
kubenswrapper[4988]: I1123 07:06:33.125825 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-scripts\") pod \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125848 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-config-data\") pod \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125874 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-combined-ca-bundle\") pod \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.125941 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc22t\" (UniqueName: \"kubernetes.io/projected/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-kube-api-access-zc22t\") pod \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\" (UID: \"8a129187-0ddc-4d27-bd0f-5e5698dc4c63\") " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.126001 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" (UID: "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.126024 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-logs" (OuterVolumeSpecName: "logs") pod "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" (UID: "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.126351 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.126366 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.135572 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8a129187-0ddc-4d27-bd0f-5e5698dc4c63" (UID: "8a129187-0ddc-4d27-bd0f-5e5698dc4c63"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.136793 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-scripts" (OuterVolumeSpecName: "scripts") pod "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" (UID: "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.136806 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-scripts" (OuterVolumeSpecName: "scripts") pod "8a129187-0ddc-4d27-bd0f-5e5698dc4c63" (UID: "8a129187-0ddc-4d27-bd0f-5e5698dc4c63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.137733 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8a129187-0ddc-4d27-bd0f-5e5698dc4c63" (UID: "8a129187-0ddc-4d27-bd0f-5e5698dc4c63"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.141359 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" (UID: "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.142592 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-kube-api-access-zc22t" (OuterVolumeSpecName: "kube-api-access-zc22t") pod "8a129187-0ddc-4d27-bd0f-5e5698dc4c63" (UID: "8a129187-0ddc-4d27-bd0f-5e5698dc4c63"). InnerVolumeSpecName "kube-api-access-zc22t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.151735 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-kube-api-access-q9w7z" (OuterVolumeSpecName: "kube-api-access-q9w7z") pod "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" (UID: "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6"). InnerVolumeSpecName "kube-api-access-q9w7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.168591 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" (UID: "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.213627 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-config-data" (OuterVolumeSpecName: "config-data") pod "8a129187-0ddc-4d27-bd0f-5e5698dc4c63" (UID: "8a129187-0ddc-4d27-bd0f-5e5698dc4c63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.222388 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a129187-0ddc-4d27-bd0f-5e5698dc4c63" (UID: "8a129187-0ddc-4d27-bd0f-5e5698dc4c63"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230405 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230440 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9w7z\" (UniqueName: \"kubernetes.io/projected/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-kube-api-access-q9w7z\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230475 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230489 4988 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230500 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230512 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230522 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230532 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230542 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc22t\" (UniqueName: \"kubernetes.io/projected/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-kube-api-access-zc22t\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.230551 4988 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a129187-0ddc-4d27-bd0f-5e5698dc4c63-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.232635 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"826b1387-6bd8-43cd-864f-a8a7f0d6f2f6","Type":"ContainerDied","Data":"dbd13e7906839b8feeca520754ee0e55872f7e8db8cb4b6c22ad9837669de392"} Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.232687 4988 scope.go:117] "RemoveContainer" containerID="f525908dbbe9431756fc9c59212f8b7ab848fdc59fddaf693323648fb806f013" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.232811 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.243268 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dwqg2" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.243378 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dwqg2" event={"ID":"8a129187-0ddc-4d27-bd0f-5e5698dc4c63","Type":"ContainerDied","Data":"5b405e8900a8c59b0ef9aab24e2657934adcae34a73d107c3812c2590dbd625d"} Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.243414 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b405e8900a8c59b0ef9aab24e2657934adcae34a73d107c3812c2590dbd625d" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.244759 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" (UID: "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: E1123 07:06:33.248013 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:7dd2e0dbb6bb5a6cecd1763e43479ca8cb6a0c502534e83c8795c0da2b50e099\\\"\"" pod="openstack/placement-db-sync-j9qhs" podUID="bd5e9b0b-3032-4132-9dbe-d12bd89466f0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.255459 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.254397 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-config-data" (OuterVolumeSpecName: "config-data") pod "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" (UID: "826b1387-6bd8-43cd-864f-a8a7f0d6f2f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.332518 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.332553 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.332561 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.580758 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.585549 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.616520 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:06:33 crc kubenswrapper[4988]: E1123 07:06:33.616953 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047b20b3-21b2-40d4-8995-c285ac850fbf" containerName="init" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.616969 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="047b20b3-21b2-40d4-8995-c285ac850fbf" containerName="init" Nov 23 07:06:33 crc kubenswrapper[4988]: E1123 07:06:33.616980 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerName="glance-httpd" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.616987 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerName="glance-httpd" Nov 23 07:06:33 crc kubenswrapper[4988]: E1123 07:06:33.617006 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a129187-0ddc-4d27-bd0f-5e5698dc4c63" containerName="keystone-bootstrap" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.617014 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a129187-0ddc-4d27-bd0f-5e5698dc4c63" containerName="keystone-bootstrap" Nov 23 07:06:33 crc kubenswrapper[4988]: E1123 07:06:33.617035 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerName="glance-log" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.617041 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerName="glance-log" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.617214 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="047b20b3-21b2-40d4-8995-c285ac850fbf" containerName="init" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.617232 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerName="glance-log" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.617249 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a129187-0ddc-4d27-bd0f-5e5698dc4c63" containerName="keystone-bootstrap" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.617270 4988 
memory_manager.go:354] "RemoveStaleState removing state" podUID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" containerName="glance-httpd" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.618538 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.620206 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.621566 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.624316 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.740083 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2rr\" (UniqueName: \"kubernetes.io/projected/dd1a9d5d-c267-4671-8a0a-498a24be0e25-kube-api-access-vm2rr\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.740452 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.740561 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.740691 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.740961 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.741066 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-logs\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.741165 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.741230 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.842807 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843135 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-logs\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843169 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843205 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843236 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2rr\" (UniqueName: \"kubernetes.io/projected/dd1a9d5d-c267-4671-8a0a-498a24be0e25-kube-api-access-vm2rr\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843256 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843282 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843301 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843332 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843921 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.843909 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-logs\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.848177 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.848299 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.849040 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-config-data\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.851314 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-scripts\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.860938 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm2rr\" (UniqueName: \"kubernetes.io/projected/dd1a9d5d-c267-4671-8a0a-498a24be0e25-kube-api-access-vm2rr\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.871900 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " pod="openstack/glance-default-external-api-0" Nov 23 07:06:33 crc kubenswrapper[4988]: I1123 07:06:33.934772 4988 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.197853 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dwqg2"] Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.206594 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dwqg2"] Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.309585 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-929sx"] Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.310734 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.317171 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.317291 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.317230 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.317456 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.317553 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zkrc" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.337063 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-929sx"] Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.453255 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4277j\" (UniqueName: \"kubernetes.io/projected/0e3460da-718e-4daf-b104-a3810d37f437-kube-api-access-4277j\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.453297 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-config-data\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.453338 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-scripts\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.453374 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-credential-keys\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.453412 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-fernet-keys\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.453445 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-combined-ca-bundle\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.513705 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826b1387-6bd8-43cd-864f-a8a7f0d6f2f6" path="/var/lib/kubelet/pods/826b1387-6bd8-43cd-864f-a8a7f0d6f2f6/volumes" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.514791 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a129187-0ddc-4d27-bd0f-5e5698dc4c63" path="/var/lib/kubelet/pods/8a129187-0ddc-4d27-bd0f-5e5698dc4c63/volumes" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.554925 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4277j\" (UniqueName: \"kubernetes.io/projected/0e3460da-718e-4daf-b104-a3810d37f437-kube-api-access-4277j\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.554974 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-config-data\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.555023 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-scripts\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.555144 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-credential-keys\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.555213 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-fernet-keys\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.555906 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-combined-ca-bundle\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.560124 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-credential-keys\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.560731 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-scripts\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.562879 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-config-data\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.565465 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-fernet-keys\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.578416 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-combined-ca-bundle\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.580119 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4277j\" (UniqueName: \"kubernetes.io/projected/0e3460da-718e-4daf-b104-a3810d37f437-kube-api-access-4277j\") pod \"keystone-bootstrap-929sx\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:34 crc kubenswrapper[4988]: I1123 07:06:34.631134 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:41 crc kubenswrapper[4988]: I1123 07:06:41.184099 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Nov 23 07:06:41 crc kubenswrapper[4988]: I1123 07:06:41.185007 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:06:43 crc kubenswrapper[4988]: I1123 07:06:43.743850 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:43 crc kubenswrapper[4988]: I1123 07:06:43.744305 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:43 crc kubenswrapper[4988]: E1123 07:06:43.894007 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645" Nov 23 07:06:43 crc kubenswrapper[4988]: E1123 07:06:43.894186 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6f555,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-26h6n_openstack(5f2fc5f1-4980-49de-8ff7-981bb9f4966c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:06:43 crc kubenswrapper[4988]: E1123 07:06:43.895512 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/barbican-db-sync-26h6n" podUID="5f2fc5f1-4980-49de-8ff7-981bb9f4966c" Nov 23 07:06:43 crc kubenswrapper[4988]: I1123 07:06:43.982704 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:06:43 crc kubenswrapper[4988]: I1123 07:06:43.991744 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.127685 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-sb\") pod \"0c14320c-7643-4a13-a602-480b33302bea\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.127748 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzn52\" (UniqueName: \"kubernetes.io/projected/0c14320c-7643-4a13-a602-480b33302bea-kube-api-access-rzn52\") pod \"0c14320c-7643-4a13-a602-480b33302bea\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.127810 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-config\") pod \"0c14320c-7643-4a13-a602-480b33302bea\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.127876 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-config-data\") pod \"85c01db1-b2c4-420a-b1d4-06eef61b0803\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.127917 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-svc\") pod \"0c14320c-7643-4a13-a602-480b33302bea\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.128466 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8g9s\" (UniqueName: \"kubernetes.io/projected/85c01db1-b2c4-420a-b1d4-06eef61b0803-kube-api-access-m8g9s\") pod \"85c01db1-b2c4-420a-b1d4-06eef61b0803\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.128528 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-nb\") pod \"0c14320c-7643-4a13-a602-480b33302bea\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.128560 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-logs\") pod \"85c01db1-b2c4-420a-b1d4-06eef61b0803\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.128579 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-swift-storage-0\") pod 
\"0c14320c-7643-4a13-a602-480b33302bea\" (UID: \"0c14320c-7643-4a13-a602-480b33302bea\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.128595 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-scripts\") pod \"85c01db1-b2c4-420a-b1d4-06eef61b0803\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.128643 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-combined-ca-bundle\") pod \"85c01db1-b2c4-420a-b1d4-06eef61b0803\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.128678 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-internal-tls-certs\") pod \"85c01db1-b2c4-420a-b1d4-06eef61b0803\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.128711 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-httpd-run\") pod \"85c01db1-b2c4-420a-b1d4-06eef61b0803\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.128744 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"85c01db1-b2c4-420a-b1d4-06eef61b0803\" (UID: \"85c01db1-b2c4-420a-b1d4-06eef61b0803\") " Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.129508 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "85c01db1-b2c4-420a-b1d4-06eef61b0803" (UID: "85c01db1-b2c4-420a-b1d4-06eef61b0803"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.129537 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-logs" (OuterVolumeSpecName: "logs") pod "85c01db1-b2c4-420a-b1d4-06eef61b0803" (UID: "85c01db1-b2c4-420a-b1d4-06eef61b0803"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.134906 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c01db1-b2c4-420a-b1d4-06eef61b0803-kube-api-access-m8g9s" (OuterVolumeSpecName: "kube-api-access-m8g9s") pod "85c01db1-b2c4-420a-b1d4-06eef61b0803" (UID: "85c01db1-b2c4-420a-b1d4-06eef61b0803"). InnerVolumeSpecName "kube-api-access-m8g9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.137935 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "85c01db1-b2c4-420a-b1d4-06eef61b0803" (UID: "85c01db1-b2c4-420a-b1d4-06eef61b0803"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.137947 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-scripts" (OuterVolumeSpecName: "scripts") pod "85c01db1-b2c4-420a-b1d4-06eef61b0803" (UID: "85c01db1-b2c4-420a-b1d4-06eef61b0803"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.140821 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c14320c-7643-4a13-a602-480b33302bea-kube-api-access-rzn52" (OuterVolumeSpecName: "kube-api-access-rzn52") pod "0c14320c-7643-4a13-a602-480b33302bea" (UID: "0c14320c-7643-4a13-a602-480b33302bea"). InnerVolumeSpecName "kube-api-access-rzn52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.158063 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85c01db1-b2c4-420a-b1d4-06eef61b0803" (UID: "85c01db1-b2c4-420a-b1d4-06eef61b0803"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.178381 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0c14320c-7643-4a13-a602-480b33302bea" (UID: "0c14320c-7643-4a13-a602-480b33302bea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.179018 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "85c01db1-b2c4-420a-b1d4-06eef61b0803" (UID: "85c01db1-b2c4-420a-b1d4-06eef61b0803"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.182416 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c14320c-7643-4a13-a602-480b33302bea" (UID: "0c14320c-7643-4a13-a602-480b33302bea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.190135 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c14320c-7643-4a13-a602-480b33302bea" (UID: "0c14320c-7643-4a13-a602-480b33302bea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.194632 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-config-data" (OuterVolumeSpecName: "config-data") pod "85c01db1-b2c4-420a-b1d4-06eef61b0803" (UID: "85c01db1-b2c4-420a-b1d4-06eef61b0803"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.200556 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c14320c-7643-4a13-a602-480b33302bea" (UID: "0c14320c-7643-4a13-a602-480b33302bea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.207149 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-config" (OuterVolumeSpecName: "config") pod "0c14320c-7643-4a13-a602-480b33302bea" (UID: "0c14320c-7643-4a13-a602-480b33302bea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.229958 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.229989 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8g9s\" (UniqueName: \"kubernetes.io/projected/85c01db1-b2c4-420a-b1d4-06eef61b0803-kube-api-access-m8g9s\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.229999 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230010 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230018 4988 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230027 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230036 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230044 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230052 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/85c01db1-b2c4-420a-b1d4-06eef61b0803-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230084 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 
23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230092 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230101 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzn52\" (UniqueName: \"kubernetes.io/projected/0c14320c-7643-4a13-a602-480b33302bea-kube-api-access-rzn52\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230110 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c14320c-7643-4a13-a602-480b33302bea-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.230118 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85c01db1-b2c4-420a-b1d4-06eef61b0803-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.248529 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.331069 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.340911 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"85c01db1-b2c4-420a-b1d4-06eef61b0803","Type":"ContainerDied","Data":"3a70355f71cfff8fcb4d82d82cafd4115e431720b8fbe122654237b7c118677e"} Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.340926 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.345330 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" event={"ID":"0c14320c-7643-4a13-a602-480b33302bea","Type":"ContainerDied","Data":"70f28edc084ff57aafd8284b47c81698c8401e2b1ace33041b1fac20bf5e753b"} Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.345369 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" Nov 23 07:06:44 crc kubenswrapper[4988]: E1123 07:06:44.346369 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645\\\"\"" pod="openstack/barbican-db-sync-26h6n" podUID="5f2fc5f1-4980-49de-8ff7-981bb9f4966c" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.432977 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.443480 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.452310 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-2qd5t"] Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.456946 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-2qd5t"] Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.465550 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:06:44 crc kubenswrapper[4988]: E1123 07:06:44.466032 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerName="glance-httpd" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.466059 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerName="glance-httpd" Nov 23 07:06:44 crc kubenswrapper[4988]: E1123 07:06:44.466075 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerName="glance-log" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.466084 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerName="glance-log" Nov 23 07:06:44 crc kubenswrapper[4988]: E1123 07:06:44.466107 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="dnsmasq-dns" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.466115 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="dnsmasq-dns" Nov 23 07:06:44 crc kubenswrapper[4988]: E1123 07:06:44.466138 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="init" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.466145 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="init" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.466361 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="dnsmasq-dns" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.466391 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerName="glance-log" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.466401 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c01db1-b2c4-420a-b1d4-06eef61b0803" containerName="glance-httpd" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.468575 4988 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.472592 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.474069 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.474569 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.510119 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c14320c-7643-4a13-a602-480b33302bea" path="/var/lib/kubelet/pods/0c14320c-7643-4a13-a602-480b33302bea/volumes" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.510862 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c01db1-b2c4-420a-b1d4-06eef61b0803" path="/var/lib/kubelet/pods/85c01db1-b2c4-420a-b1d4-06eef61b0803/volumes" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.534160 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.534287 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.534341 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.534367 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.534413 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.534439 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-logs\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc 
kubenswrapper[4988]: I1123 07:06:44.534543 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hs5p\" (UniqueName: \"kubernetes.io/projected/b20a898b-0c5b-4b53-a550-246abf8f6d8a-kube-api-access-5hs5p\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.535226 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.636637 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.636735 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.636790 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.636816 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-logs\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.636901 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hs5p\" (UniqueName: \"kubernetes.io/projected/b20a898b-0c5b-4b53-a550-246abf8f6d8a-kube-api-access-5hs5p\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.636912 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.636932 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc 
kubenswrapper[4988]: I1123 07:06:44.636961 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.637018 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.637677 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-logs\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.637774 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.643866 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.647379 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.655136 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.655428 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.658617 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hs5p\" (UniqueName: \"kubernetes.io/projected/b20a898b-0c5b-4b53-a550-246abf8f6d8a-kube-api-access-5hs5p\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.665964 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:06:44 crc kubenswrapper[4988]: I1123 07:06:44.789603 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:45 crc kubenswrapper[4988]: I1123 07:06:45.190984 4988 scope.go:117] "RemoveContainer" containerID="d05bae519c2db642a8b2a4f35414eb0733d2f840329726d78ba9c70ef134259e" Nov 23 07:06:45 crc kubenswrapper[4988]: E1123 07:06:45.201475 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879" Nov 23 07:06:45 crc kubenswrapper[4988]: E1123 07:06:45.201661 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7x4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cinder-db-sync-4mmc7_openstack(95b8327c-fa6e-40b7-984e-c819d78da49b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:06:45 crc kubenswrapper[4988]: E1123 07:06:45.203097 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4mmc7" podUID="95b8327c-fa6e-40b7-984e-c819d78da49b" Nov 23 07:06:45 crc kubenswrapper[4988]: E1123 07:06:45.367533 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879\\\"\"" pod="openstack/cinder-db-sync-4mmc7" podUID="95b8327c-fa6e-40b7-984e-c819d78da49b" Nov 23 07:06:45 crc kubenswrapper[4988]: I1123 07:06:45.367782 4988 scope.go:117] "RemoveContainer" containerID="a87db98f40286a485b8ca65514aaf7d43dfb15da1b059a3a8e7dee0ffd0c4499" Nov 23 07:06:45 crc kubenswrapper[4988]: I1123 07:06:45.397355 4988 scope.go:117] "RemoveContainer" containerID="057b98944194b5d897f4ca8d8a97c5354739ca930fb94e035ae2e26b7dbf690a" Nov 23 07:06:45 crc kubenswrapper[4988]: I1123 07:06:45.420010 4988 scope.go:117] "RemoveContainer" containerID="6f1213cbef84e94a2f9d92cc1a077ad4277997f1c6fdffd56afccf27f21dc773" Nov 23 07:06:45 crc kubenswrapper[4988]: I1123 07:06:45.437414 4988 scope.go:117] "RemoveContainer" containerID="430acce0c53d4a52d3e166b03a29b79391ae7a5aa1578887fa0af764f4878671" Nov 23 07:06:45 crc kubenswrapper[4988]: I1123 07:06:45.730079 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:06:45 crc kubenswrapper[4988]: W1123 07:06:45.737664 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb20a898b_0c5b_4b53_a550_246abf8f6d8a.slice/crio-0f20640f52809a161549efb74a91ca3c768dbb3c3934d7898499e04bcb7da5f1 WatchSource:0}: Error finding container 0f20640f52809a161549efb74a91ca3c768dbb3c3934d7898499e04bcb7da5f1: Status 404 returned error can't find the container with id 0f20640f52809a161549efb74a91ca3c768dbb3c3934d7898499e04bcb7da5f1 Nov 23 07:06:45 crc kubenswrapper[4988]: I1123 07:06:45.754629 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-929sx"] Nov 23 07:06:45 crc kubenswrapper[4988]: W1123 07:06:45.763767 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e3460da_718e_4daf_b104_a3810d37f437.slice/crio-bf8bc31201a9c2fac194e00ea36e1420a4fdbfdafcd655ad16e17c327cc22361 WatchSource:0}: Error finding container bf8bc31201a9c2fac194e00ea36e1420a4fdbfdafcd655ad16e17c327cc22361: Status 404 returned error can't find the container with id bf8bc31201a9c2fac194e00ea36e1420a4fdbfdafcd655ad16e17c327cc22361 Nov 23 07:06:45 crc kubenswrapper[4988]: I1123 07:06:45.826447 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.185657 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-2qd5t" podUID="0c14320c-7643-4a13-a602-480b33302bea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.135:5353: i/o timeout" Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.376111 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9qhs" event={"ID":"bd5e9b0b-3032-4132-9dbe-d12bd89466f0","Type":"ContainerStarted","Data":"facd8c1229becea5d1702a120f64f08601bb811e403d35249c0df6437c57c708"} Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.384135 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b20a898b-0c5b-4b53-a550-246abf8f6d8a","Type":"ContainerStarted","Data":"2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd"} Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.384191 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b20a898b-0c5b-4b53-a550-246abf8f6d8a","Type":"ContainerStarted","Data":"0f20640f52809a161549efb74a91ca3c768dbb3c3934d7898499e04bcb7da5f1"} Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.386130 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-929sx" event={"ID":"0e3460da-718e-4daf-b104-a3810d37f437","Type":"ContainerStarted","Data":"17fdcf6c95d00108492d6dacd204a2b7d7c6e0b1bc2c771436f7b091577c80bb"} Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.386157 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-929sx" event={"ID":"0e3460da-718e-4daf-b104-a3810d37f437","Type":"ContainerStarted","Data":"bf8bc31201a9c2fac194e00ea36e1420a4fdbfdafcd655ad16e17c327cc22361"} Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.388158 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1a9d5d-c267-4671-8a0a-498a24be0e25","Type":"ContainerStarted","Data":"52aed4cfc59f6a8265306578faf140117791e8010be7b8413384ddb112570573"} Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.389968 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerStarted","Data":"af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d"} Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.405517 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j9qhs" podStartSLOduration=2.34308638 podStartE2EDuration="34.405496535s" podCreationTimestamp="2025-11-23 07:06:12 +0000 UTC" firstStartedPulling="2025-11-23 07:06:13.907776554 +0000 UTC m=+1226.216289317" lastFinishedPulling="2025-11-23 07:06:45.970186709 +0000 UTC m=+1258.278699472" observedRunningTime="2025-11-23 07:06:46.399630885 +0000 UTC m=+1258.708143638" watchObservedRunningTime="2025-11-23 07:06:46.405496535 +0000 UTC m=+1258.714009298" Nov 23 07:06:46 crc kubenswrapper[4988]: I1123 07:06:46.417869 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-929sx" podStartSLOduration=12.417850579 podStartE2EDuration="12.417850579s" podCreationTimestamp="2025-11-23 07:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:46.416065357 +0000 UTC m=+1258.724578140" watchObservedRunningTime="2025-11-23 07:06:46.417850579 +0000 UTC m=+1258.726363352" Nov 23 07:06:47 crc kubenswrapper[4988]: I1123 07:06:47.406477 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"b20a898b-0c5b-4b53-a550-246abf8f6d8a","Type":"ContainerStarted","Data":"897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70"} Nov 23 07:06:47 crc kubenswrapper[4988]: I1123 07:06:47.410255 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1a9d5d-c267-4671-8a0a-498a24be0e25","Type":"ContainerStarted","Data":"b0af7e44f04e408944e7c1b2eef12ae141deb9974fd6480160b622ff9e9d9379"} Nov 23 07:06:47 crc kubenswrapper[4988]: I1123 07:06:47.410285 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1a9d5d-c267-4671-8a0a-498a24be0e25","Type":"ContainerStarted","Data":"b6e5acca2641d8e9a3eda7015fc4fdfb6bc3436733445b23cf69c1ae74733c20"} Nov 23 07:06:47 crc kubenswrapper[4988]: I1123 07:06:47.421464 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerStarted","Data":"092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042"} Nov 23 07:06:47 crc kubenswrapper[4988]: I1123 07:06:47.437790 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.437328274 podStartE2EDuration="3.437328274s" podCreationTimestamp="2025-11-23 07:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:47.431616558 +0000 UTC m=+1259.740129321" watchObservedRunningTime="2025-11-23 07:06:47.437328274 +0000 UTC m=+1259.745841037" Nov 23 07:06:47 crc kubenswrapper[4988]: I1123 07:06:47.478144 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=14.478126526 podStartE2EDuration="14.478126526s" podCreationTimestamp="2025-11-23 07:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:47.469726336 +0000 UTC m=+1259.778239099" watchObservedRunningTime="2025-11-23 07:06:47.478126526 +0000 UTC m=+1259.786639299" Nov 23 07:06:50 crc kubenswrapper[4988]: I1123 07:06:50.463776 4988 generic.go:334] "Generic (PLEG): container finished" podID="0e3460da-718e-4daf-b104-a3810d37f437" containerID="17fdcf6c95d00108492d6dacd204a2b7d7c6e0b1bc2c771436f7b091577c80bb" exitCode=0 Nov 23 07:06:50 crc kubenswrapper[4988]: I1123 07:06:50.463827 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-929sx" event={"ID":"0e3460da-718e-4daf-b104-a3810d37f437","Type":"ContainerDied","Data":"17fdcf6c95d00108492d6dacd204a2b7d7c6e0b1bc2c771436f7b091577c80bb"} Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.486209 4988 generic.go:334] "Generic (PLEG): container finished" podID="bd5e9b0b-3032-4132-9dbe-d12bd89466f0" containerID="facd8c1229becea5d1702a120f64f08601bb811e403d35249c0df6437c57c708" exitCode=0 Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.486418 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9qhs" event={"ID":"bd5e9b0b-3032-4132-9dbe-d12bd89466f0","Type":"ContainerDied","Data":"facd8c1229becea5d1702a120f64f08601bb811e403d35249c0df6437c57c708"} Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.824958 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.965930 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-credential-keys\") pod \"0e3460da-718e-4daf-b104-a3810d37f437\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.965979 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-scripts\") pod \"0e3460da-718e-4daf-b104-a3810d37f437\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.966085 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4277j\" (UniqueName: \"kubernetes.io/projected/0e3460da-718e-4daf-b104-a3810d37f437-kube-api-access-4277j\") pod \"0e3460da-718e-4daf-b104-a3810d37f437\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.966104 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-fernet-keys\") pod \"0e3460da-718e-4daf-b104-a3810d37f437\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.966160 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-combined-ca-bundle\") pod \"0e3460da-718e-4daf-b104-a3810d37f437\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.966219 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-config-data\") pod \"0e3460da-718e-4daf-b104-a3810d37f437\" (UID: \"0e3460da-718e-4daf-b104-a3810d37f437\") " Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.971217 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0e3460da-718e-4daf-b104-a3810d37f437" (UID: "0e3460da-718e-4daf-b104-a3810d37f437"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.972119 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-scripts" (OuterVolumeSpecName: "scripts") pod "0e3460da-718e-4daf-b104-a3810d37f437" (UID: "0e3460da-718e-4daf-b104-a3810d37f437"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.972698 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0e3460da-718e-4daf-b104-a3810d37f437" (UID: "0e3460da-718e-4daf-b104-a3810d37f437"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.973598 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3460da-718e-4daf-b104-a3810d37f437-kube-api-access-4277j" (OuterVolumeSpecName: "kube-api-access-4277j") pod "0e3460da-718e-4daf-b104-a3810d37f437" (UID: "0e3460da-718e-4daf-b104-a3810d37f437"). InnerVolumeSpecName "kube-api-access-4277j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.990434 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-config-data" (OuterVolumeSpecName: "config-data") pod "0e3460da-718e-4daf-b104-a3810d37f437" (UID: "0e3460da-718e-4daf-b104-a3810d37f437"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:51 crc kubenswrapper[4988]: I1123 07:06:51.994762 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e3460da-718e-4daf-b104-a3810d37f437" (UID: "0e3460da-718e-4daf-b104-a3810d37f437"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.068255 4988 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.068506 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.068587 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4277j\" (UniqueName: \"kubernetes.io/projected/0e3460da-718e-4daf-b104-a3810d37f437-kube-api-access-4277j\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.068699 4988 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.068776 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.068854 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e3460da-718e-4daf-b104-a3810d37f437-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.502985 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-929sx" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.517058 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-929sx" event={"ID":"0e3460da-718e-4daf-b104-a3810d37f437","Type":"ContainerDied","Data":"bf8bc31201a9c2fac194e00ea36e1420a4fdbfdafcd655ad16e17c327cc22361"} Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.521517 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8bc31201a9c2fac194e00ea36e1420a4fdbfdafcd655ad16e17c327cc22361" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.521708 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerStarted","Data":"f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401"} Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.583669 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5c99767b4c-cbdj7"] Nov 23 07:06:52 crc kubenswrapper[4988]: E1123 07:06:52.584119 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3460da-718e-4daf-b104-a3810d37f437" containerName="keystone-bootstrap" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.584141 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3460da-718e-4daf-b104-a3810d37f437" containerName="keystone-bootstrap" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.584373 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3460da-718e-4daf-b104-a3810d37f437" containerName="keystone-bootstrap" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.585018 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.587259 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.587603 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.587623 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7zkrc" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.587865 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.588650 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.589296 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.595603 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c99767b4c-cbdj7"] Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.780868 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-fernet-keys\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.780922 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-config-data\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.780948 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-public-tls-certs\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.780982 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-scripts\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.781014 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-credential-keys\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.781086 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-internal-tls-certs\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.781127 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdlj8\" (UniqueName: \"kubernetes.io/projected/7bde2362-ff90-47d5-845c-8dfcfe826a61-kube-api-access-vdlj8\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.781155 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-combined-ca-bundle\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.866843 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.885689 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-scripts\") pod \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.885736 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-logs\") pod \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.885893 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-combined-ca-bundle\") pod \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.885962 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm2h4\" (UniqueName: \"kubernetes.io/projected/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-kube-api-access-dm2h4\") pod \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.886016 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-config-data\") pod \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\" (UID: \"bd5e9b0b-3032-4132-9dbe-d12bd89466f0\") " Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.886414 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-logs" (OuterVolumeSpecName: "logs") pod "bd5e9b0b-3032-4132-9dbe-d12bd89466f0" (UID: "bd5e9b0b-3032-4132-9dbe-d12bd89466f0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.887028 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-credential-keys\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.887182 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-internal-tls-certs\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.887290 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdlj8\" (UniqueName: \"kubernetes.io/projected/7bde2362-ff90-47d5-845c-8dfcfe826a61-kube-api-access-vdlj8\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.887357 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-combined-ca-bundle\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.887476 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-fernet-keys\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.887518 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-config-data\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.887579 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-public-tls-certs\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.887642 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-scripts\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.887757 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.890532 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-scripts" (OuterVolumeSpecName: "scripts") pod "bd5e9b0b-3032-4132-9dbe-d12bd89466f0" (UID: "bd5e9b0b-3032-4132-9dbe-d12bd89466f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.891344 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-kube-api-access-dm2h4" (OuterVolumeSpecName: "kube-api-access-dm2h4") pod "bd5e9b0b-3032-4132-9dbe-d12bd89466f0" (UID: "bd5e9b0b-3032-4132-9dbe-d12bd89466f0"). InnerVolumeSpecName "kube-api-access-dm2h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.891663 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-internal-tls-certs\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.893265 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-config-data\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.894702 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-fernet-keys\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.894794 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-credential-keys\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.894819 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-combined-ca-bundle\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.896818 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-scripts\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.899913 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-public-tls-certs\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.908958 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdlj8\" (UniqueName: 
\"kubernetes.io/projected/7bde2362-ff90-47d5-845c-8dfcfe826a61-kube-api-access-vdlj8\") pod \"keystone-5c99767b4c-cbdj7\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.933237 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-config-data" (OuterVolumeSpecName: "config-data") pod "bd5e9b0b-3032-4132-9dbe-d12bd89466f0" (UID: "bd5e9b0b-3032-4132-9dbe-d12bd89466f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.933482 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd5e9b0b-3032-4132-9dbe-d12bd89466f0" (UID: "bd5e9b0b-3032-4132-9dbe-d12bd89466f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.989965 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.990228 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.990240 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm2h4\" (UniqueName: \"kubernetes.io/projected/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-kube-api-access-dm2h4\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:52 crc kubenswrapper[4988]: I1123 07:06:52.990248 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd5e9b0b-3032-4132-9dbe-d12bd89466f0-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.202749 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.520806 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j9qhs" event={"ID":"bd5e9b0b-3032-4132-9dbe-d12bd89466f0","Type":"ContainerDied","Data":"67112064fb888bd23ab7f534b6ae77f2678518e7c37e620a274fd843d9f04bd1"} Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.521007 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67112064fb888bd23ab7f534b6ae77f2678518e7c37e620a274fd843d9f04bd1" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.521057 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j9qhs" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.674753 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55cfdd5f8d-kn94x"] Nov 23 07:06:53 crc kubenswrapper[4988]: E1123 07:06:53.675125 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5e9b0b-3032-4132-9dbe-d12bd89466f0" containerName="placement-db-sync" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.675140 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5e9b0b-3032-4132-9dbe-d12bd89466f0" containerName="placement-db-sync" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.675345 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5e9b0b-3032-4132-9dbe-d12bd89466f0" containerName="placement-db-sync" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.676438 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.678760 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.679010 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.679078 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-b492k" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.679115 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.679171 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.685986 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55cfdd5f8d-kn94x"] Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.721286 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c99767b4c-cbdj7"] Nov 23 07:06:53 crc kubenswrapper[4988]: W1123 07:06:53.796486 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bde2362_ff90_47d5_845c_8dfcfe826a61.slice/crio-86fc2d826b7e686af0952775c9edb0d5758fcf28580139cc5ab2fea25f2186a8 WatchSource:0}: Error finding container 86fc2d826b7e686af0952775c9edb0d5758fcf28580139cc5ab2fea25f2186a8: Status 404 returned error can't find the container with id 86fc2d826b7e686af0952775c9edb0d5758fcf28580139cc5ab2fea25f2186a8 Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.811556 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-internal-tls-certs\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.811615 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6f8f28-b3df-4d34-a898-74f4dc12f201-logs\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc 
kubenswrapper[4988]: I1123 07:06:53.811674 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqsbd\" (UniqueName: \"kubernetes.io/projected/4b6f8f28-b3df-4d34-a898-74f4dc12f201-kube-api-access-zqsbd\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.811710 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-scripts\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.811731 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-config-data\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.811773 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-combined-ca-bundle\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.811795 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-public-tls-certs\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.914853 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-scripts\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.914904 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-config-data\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.914947 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-combined-ca-bundle\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.914982 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-public-tls-certs\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: 
I1123 07:06:53.915014 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-internal-tls-certs\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.915048 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6f8f28-b3df-4d34-a898-74f4dc12f201-logs\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.915086 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqsbd\" (UniqueName: \"kubernetes.io/projected/4b6f8f28-b3df-4d34-a898-74f4dc12f201-kube-api-access-zqsbd\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.919129 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-scripts\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.919287 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-config-data\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.919398 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6f8f28-b3df-4d34-a898-74f4dc12f201-logs\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.921723 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-internal-tls-certs\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.922008 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-combined-ca-bundle\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.931671 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqsbd\" (UniqueName: \"kubernetes.io/projected/4b6f8f28-b3df-4d34-a898-74f4dc12f201-kube-api-access-zqsbd\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.932174 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-public-tls-certs\") pod \"placement-55cfdd5f8d-kn94x\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.936493 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.936609 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.983173 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 07:06:53 crc kubenswrapper[4988]: I1123 07:06:53.988139 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.005782 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.483235 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55cfdd5f8d-kn94x"] Nov 23 07:06:54 crc kubenswrapper[4988]: W1123 07:06:54.486786 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b6f8f28_b3df_4d34_a898_74f4dc12f201.slice/crio-c798f28a4047f85fffb38a1e4ed84f413eccd05375411a4daf12143fced14079 WatchSource:0}: Error finding container c798f28a4047f85fffb38a1e4ed84f413eccd05375411a4daf12143fced14079: Status 404 returned error can't find the container with id c798f28a4047f85fffb38a1e4ed84f413eccd05375411a4daf12143fced14079 Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.552112 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c99767b4c-cbdj7" event={"ID":"7bde2362-ff90-47d5-845c-8dfcfe826a61","Type":"ContainerStarted","Data":"562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95"} Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.552177 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c99767b4c-cbdj7" event={"ID":"7bde2362-ff90-47d5-845c-8dfcfe826a61","Type":"ContainerStarted","Data":"86fc2d826b7e686af0952775c9edb0d5758fcf28580139cc5ab2fea25f2186a8"} Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.552294 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.577425 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55cfdd5f8d-kn94x" event={"ID":"4b6f8f28-b3df-4d34-a898-74f4dc12f201","Type":"ContainerStarted","Data":"c798f28a4047f85fffb38a1e4ed84f413eccd05375411a4daf12143fced14079"} Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.577489 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.577505 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.590737 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5c99767b4c-cbdj7" podStartSLOduration=2.590718355 podStartE2EDuration="2.590718355s" 
podCreationTimestamp="2025-11-23 07:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:54.587418737 +0000 UTC m=+1266.895931500" watchObservedRunningTime="2025-11-23 07:06:54.590718355 +0000 UTC m=+1266.899231118" Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.790548 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.790591 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.835904 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:54 crc kubenswrapper[4988]: I1123 07:06:54.842824 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:55 crc kubenswrapper[4988]: I1123 07:06:55.585772 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:55 crc kubenswrapper[4988]: I1123 07:06:55.586181 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:56 crc kubenswrapper[4988]: I1123 07:06:56.437698 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:06:56 crc kubenswrapper[4988]: I1123 07:06:56.446549 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:06:56 crc kubenswrapper[4988]: I1123 07:06:56.599459 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55cfdd5f8d-kn94x" event={"ID":"4b6f8f28-b3df-4d34-a898-74f4dc12f201","Type":"ContainerStarted","Data":"db7496dcd1faf49f05c99168f0af122bee0300bc60d1e286d3f55f6eb98a7498"} Nov 23 07:06:57 crc kubenswrapper[4988]: I1123 07:06:57.625547 4988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:06:57 crc kubenswrapper[4988]: I1123 07:06:57.625999 4988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:06:57 crc kubenswrapper[4988]: I1123 07:06:57.646002 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:57 crc kubenswrapper[4988]: I1123 07:06:57.650167 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:06:59 crc kubenswrapper[4988]: I1123 07:06:59.646856 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55cfdd5f8d-kn94x" event={"ID":"4b6f8f28-b3df-4d34-a898-74f4dc12f201","Type":"ContainerStarted","Data":"c3aa642238bf2e182f6aa8b168ebe96dd9671c5155884f7a97375c632ebe4f02"} Nov 23 07:06:59 crc kubenswrapper[4988]: I1123 07:06:59.688691 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55cfdd5f8d-kn94x" podStartSLOduration=6.688672624 podStartE2EDuration="6.688672624s" podCreationTimestamp="2025-11-23 07:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:06:59.679547486 +0000 UTC m=+1271.988060279" 
watchObservedRunningTime="2025-11-23 07:06:59.688672624 +0000 UTC m=+1271.997185397" Nov 23 07:07:00 crc kubenswrapper[4988]: I1123 07:07:00.662568 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:07:00 crc kubenswrapper[4988]: I1123 07:07:00.662940 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:07:01 crc kubenswrapper[4988]: I1123 07:07:01.783464 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.722694 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-26h6n" event={"ID":"5f2fc5f1-4980-49de-8ff7-981bb9f4966c","Type":"ContainerStarted","Data":"43836a955a8fe66876f7b7225a37522438dc7969421767e3ebb576fee4896ce2"} Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.726917 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerStarted","Data":"9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87"} Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.727018 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="ceilometer-central-agent" containerID="cri-o://af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d" gracePeriod=30 Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.727015 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="sg-core" containerID="cri-o://f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401" gracePeriod=30 Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.727015 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="ceilometer-notification-agent" containerID="cri-o://092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042" gracePeriod=30 Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.727086 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="proxy-httpd" containerID="cri-o://9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87" gracePeriod=30 Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.727395 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.730369 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mmc7" event={"ID":"95b8327c-fa6e-40b7-984e-c819d78da49b","Type":"ContainerStarted","Data":"d4c96fa15b5d2aa8b6e8be74287805d1cd138dd10074569c35fd38198b992b42"} Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.747855 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-26h6n" podStartSLOduration=2.951936897 podStartE2EDuration="52.747836669s" podCreationTimestamp="2025-11-23 07:06:12 +0000 UTC" firstStartedPulling="2025-11-23 07:06:14.119812023 +0000 UTC m=+1226.428324786" lastFinishedPulling="2025-11-23 07:07:03.915711795 +0000 UTC m=+1276.224224558" 
observedRunningTime="2025-11-23 07:07:04.743586028 +0000 UTC m=+1277.052098791" watchObservedRunningTime="2025-11-23 07:07:04.747836669 +0000 UTC m=+1277.056349432" Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.777990 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.787777797 podStartE2EDuration="52.777971657s" podCreationTimestamp="2025-11-23 07:06:12 +0000 UTC" firstStartedPulling="2025-11-23 07:06:13.92659204 +0000 UTC m=+1226.235104803" lastFinishedPulling="2025-11-23 07:07:03.9167859 +0000 UTC m=+1276.225298663" observedRunningTime="2025-11-23 07:07:04.771319028 +0000 UTC m=+1277.079831821" watchObservedRunningTime="2025-11-23 07:07:04.777971657 +0000 UTC m=+1277.086484420" Nov 23 07:07:04 crc kubenswrapper[4988]: I1123 07:07:04.799312 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4mmc7" podStartSLOduration=2.267350638 podStartE2EDuration="52.799289614s" podCreationTimestamp="2025-11-23 07:06:12 +0000 UTC" firstStartedPulling="2025-11-23 07:06:13.385431928 +0000 UTC m=+1225.693944691" lastFinishedPulling="2025-11-23 07:07:03.917370864 +0000 UTC m=+1276.225883667" observedRunningTime="2025-11-23 07:07:04.794151332 +0000 UTC m=+1277.102664085" watchObservedRunningTime="2025-11-23 07:07:04.799289614 +0000 UTC m=+1277.107802377" Nov 23 07:07:05 crc kubenswrapper[4988]: I1123 07:07:05.749929 4988 generic.go:334] "Generic (PLEG): container finished" podID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerID="9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87" exitCode=0 Nov 23 07:07:05 crc kubenswrapper[4988]: I1123 07:07:05.749966 4988 generic.go:334] "Generic (PLEG): container finished" podID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerID="f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401" exitCode=2 Nov 23 07:07:05 crc kubenswrapper[4988]: I1123 07:07:05.749977 4988 generic.go:334] "Generic (PLEG): container finished" podID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerID="af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d" exitCode=0 Nov 23 07:07:05 crc kubenswrapper[4988]: I1123 07:07:05.750002 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerDied","Data":"9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87"} Nov 23 07:07:05 crc kubenswrapper[4988]: I1123 07:07:05.750031 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerDied","Data":"f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401"} Nov 23 07:07:05 crc kubenswrapper[4988]: I1123 07:07:05.750044 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerDied","Data":"af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d"} Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.470227 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.602027 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-log-httpd\") pod \"f0cb22d8-be3d-440b-9316-3406be73f68b\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.602673 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-run-httpd\") pod \"f0cb22d8-be3d-440b-9316-3406be73f68b\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.602870 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-scripts\") pod \"f0cb22d8-be3d-440b-9316-3406be73f68b\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.603021 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f0cb22d8-be3d-440b-9316-3406be73f68b" (UID: "f0cb22d8-be3d-440b-9316-3406be73f68b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.603048 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f0cb22d8-be3d-440b-9316-3406be73f68b" (UID: "f0cb22d8-be3d-440b-9316-3406be73f68b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.603331 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7stg\" (UniqueName: \"kubernetes.io/projected/f0cb22d8-be3d-440b-9316-3406be73f68b-kube-api-access-v7stg\") pod \"f0cb22d8-be3d-440b-9316-3406be73f68b\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.603501 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-sg-core-conf-yaml\") pod \"f0cb22d8-be3d-440b-9316-3406be73f68b\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.603702 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-config-data\") pod \"f0cb22d8-be3d-440b-9316-3406be73f68b\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.603912 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-combined-ca-bundle\") pod \"f0cb22d8-be3d-440b-9316-3406be73f68b\" (UID: \"f0cb22d8-be3d-440b-9316-3406be73f68b\") " Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.605081 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.605249 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f0cb22d8-be3d-440b-9316-3406be73f68b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.609951 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-scripts" (OuterVolumeSpecName: "scripts") pod "f0cb22d8-be3d-440b-9316-3406be73f68b" (UID: "f0cb22d8-be3d-440b-9316-3406be73f68b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.610029 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0cb22d8-be3d-440b-9316-3406be73f68b-kube-api-access-v7stg" (OuterVolumeSpecName: "kube-api-access-v7stg") pod "f0cb22d8-be3d-440b-9316-3406be73f68b" (UID: "f0cb22d8-be3d-440b-9316-3406be73f68b"). InnerVolumeSpecName "kube-api-access-v7stg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.629009 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f0cb22d8-be3d-440b-9316-3406be73f68b" (UID: "f0cb22d8-be3d-440b-9316-3406be73f68b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.692701 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0cb22d8-be3d-440b-9316-3406be73f68b" (UID: "f0cb22d8-be3d-440b-9316-3406be73f68b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.706965 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.707001 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7stg\" (UniqueName: \"kubernetes.io/projected/f0cb22d8-be3d-440b-9316-3406be73f68b-kube-api-access-v7stg\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.707013 4988 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.707021 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.708373 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-config-data" (OuterVolumeSpecName: "config-data") pod "f0cb22d8-be3d-440b-9316-3406be73f68b" (UID: "f0cb22d8-be3d-440b-9316-3406be73f68b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.775826 4988 generic.go:334] "Generic (PLEG): container finished" podID="5f2fc5f1-4980-49de-8ff7-981bb9f4966c" containerID="43836a955a8fe66876f7b7225a37522438dc7969421767e3ebb576fee4896ce2" exitCode=0 Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.775900 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-26h6n" event={"ID":"5f2fc5f1-4980-49de-8ff7-981bb9f4966c","Type":"ContainerDied","Data":"43836a955a8fe66876f7b7225a37522438dc7969421767e3ebb576fee4896ce2"} Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.777839 4988 generic.go:334] "Generic (PLEG): container finished" podID="8492667f-e261-4214-8c00-d2271167976e" containerID="04443b6b592714f8daae5298a37741922db73aa1d1ed82ef92859ab5f2626a5d" exitCode=0 Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.777891 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mzqm8" event={"ID":"8492667f-e261-4214-8c00-d2271167976e","Type":"ContainerDied","Data":"04443b6b592714f8daae5298a37741922db73aa1d1ed82ef92859ab5f2626a5d"} Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.784915 4988 generic.go:334] "Generic (PLEG): container finished" podID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerID="092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042" exitCode=0 Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.784980 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerDied","Data":"092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042"} Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.785030 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f0cb22d8-be3d-440b-9316-3406be73f68b","Type":"ContainerDied","Data":"760e76a77636fc038d2d34d45e799e954b047ced4e91e57c263e3380f09ce635"} Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.785062 4988 scope.go:117] "RemoveContainer" containerID="9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.785293 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.808339 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0cb22d8-be3d-440b-9316-3406be73f68b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.825803 4988 scope.go:117] "RemoveContainer" containerID="f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.843165 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.852397 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.854144 4988 scope.go:117] "RemoveContainer" containerID="092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.865487 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:08 crc kubenswrapper[4988]: E1123 07:07:08.865879 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="ceilometer-notification-agent" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.865899 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="ceilometer-notification-agent" Nov 23 07:07:08 crc kubenswrapper[4988]: E1123 07:07:08.865930 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="proxy-httpd" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.865939 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="proxy-httpd" Nov 23 07:07:08 crc kubenswrapper[4988]: E1123 07:07:08.865956 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="ceilometer-central-agent" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.865964 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="ceilometer-central-agent" Nov 23 07:07:08 crc kubenswrapper[4988]: E1123 07:07:08.865985 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="sg-core" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.865994 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="sg-core" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.866235 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="sg-core" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.866251 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="proxy-httpd" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.866260 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="ceilometer-notification-agent" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.866277 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" containerName="ceilometer-central-agent" Nov 23 
07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.868136 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.871688 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.872006 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.874719 4988 scope.go:117] "RemoveContainer" containerID="af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.876314 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.902780 4988 scope.go:117] "RemoveContainer" containerID="9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87" Nov 23 07:07:08 crc kubenswrapper[4988]: E1123 07:07:08.903313 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87\": container with ID starting with 9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87 not found: ID does not exist" containerID="9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.903368 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87"} err="failed to get container status \"9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87\": rpc error: code = NotFound desc = could not find container \"9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87\": container with ID starting with 9be6df1e4f900001e67413326777df1dfc3a9445e5a91fea896a37ce0937cc87 not found: ID does not exist" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.903399 4988 scope.go:117] "RemoveContainer" containerID="f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401" Nov 23 07:07:08 crc kubenswrapper[4988]: E1123 07:07:08.903891 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401\": container with ID starting with f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401 not found: ID does not exist" containerID="f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.903947 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401"} err="failed to get container status \"f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401\": rpc error: code = NotFound desc = could not find container \"f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401\": container with ID starting with f3311657e0e42ebd8e158274f77bbde47e6370f72d9af4ec98d4b1188bd0c401 not found: ID does not exist" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.903981 4988 scope.go:117] "RemoveContainer" containerID="092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042" Nov 23 07:07:08 crc kubenswrapper[4988]: E1123 07:07:08.904337 4988 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042\": container with ID starting with 092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042 not found: ID does not exist" containerID="092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.904376 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042"} err="failed to get container status \"092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042\": rpc error: code = NotFound desc = could not find container \"092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042\": container with ID starting with 092e0b12d4b20c8f207e7508960f1933fa166790d1fa79695661b4790d2c8042 not found: ID does not exist" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.904397 4988 scope.go:117] "RemoveContainer" containerID="af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d" Nov 23 07:07:08 crc kubenswrapper[4988]: E1123 07:07:08.905324 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d\": container with ID starting with af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d not found: ID does not exist" containerID="af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d" Nov 23 07:07:08 crc kubenswrapper[4988]: I1123 07:07:08.905374 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d"} err="failed to get container status \"af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d\": rpc error: code = NotFound desc = could not find container \"af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d\": container with ID starting with af094ab787a195d1a57c4f35cba104ae6b76fb6900f822a270326a942682276d not found: ID does not exist" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.010181 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-log-httpd\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.010308 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-run-httpd\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.010389 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.010425 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.010449 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cgt2\" (UniqueName: \"kubernetes.io/projected/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-kube-api-access-9cgt2\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.010576 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-scripts\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.010726 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-config-data\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.112254 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.112343 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cgt2\" (UniqueName: \"kubernetes.io/projected/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-kube-api-access-9cgt2\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.112483 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-scripts\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.112519 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-config-data\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.112594 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-log-httpd\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.113145 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-run-httpd\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.113335 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-log-httpd\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.113407 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-run-httpd\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.113591 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.116166 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.116972 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-config-data\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.117641 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-scripts\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.117978 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.139699 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cgt2\" (UniqueName: \"kubernetes.io/projected/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-kube-api-access-9cgt2\") pod \"ceilometer-0\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.194477 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.619366 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:09 crc kubenswrapper[4988]: W1123 07:07:09.624927 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fdd6baa_b0e9_4a41_a13d_9b7f15e14e9f.slice/crio-67286e64b93672ad61c0ce1537ac5bb65b23a7372dc1abbac31d66134393bf4c WatchSource:0}: Error finding container 67286e64b93672ad61c0ce1537ac5bb65b23a7372dc1abbac31d66134393bf4c: Status 404 returned error can't find the container with id 67286e64b93672ad61c0ce1537ac5bb65b23a7372dc1abbac31d66134393bf4c Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.797597 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerStarted","Data":"67286e64b93672ad61c0ce1537ac5bb65b23a7372dc1abbac31d66134393bf4c"} Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.799393 4988 generic.go:334] "Generic (PLEG): container finished" podID="95b8327c-fa6e-40b7-984e-c819d78da49b" containerID="d4c96fa15b5d2aa8b6e8be74287805d1cd138dd10074569c35fd38198b992b42" exitCode=0 Nov 23 07:07:09 crc kubenswrapper[4988]: I1123 07:07:09.799501 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mmc7" event={"ID":"95b8327c-fa6e-40b7-984e-c819d78da49b","Type":"ContainerDied","Data":"d4c96fa15b5d2aa8b6e8be74287805d1cd138dd10074569c35fd38198b992b42"} Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.116019 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-26h6n" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.126492 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.241011 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f555\" (UniqueName: \"kubernetes.io/projected/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-kube-api-access-6f555\") pod \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.241139 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-combined-ca-bundle\") pod \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.241313 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrbpt\" (UniqueName: \"kubernetes.io/projected/8492667f-e261-4214-8c00-d2271167976e-kube-api-access-hrbpt\") pod \"8492667f-e261-4214-8c00-d2271167976e\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.241388 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-combined-ca-bundle\") pod \"8492667f-e261-4214-8c00-d2271167976e\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.241481 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-db-sync-config-data\") pod \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\" (UID: \"5f2fc5f1-4980-49de-8ff7-981bb9f4966c\") " Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.241541 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-config\") pod \"8492667f-e261-4214-8c00-d2271167976e\" (UID: \"8492667f-e261-4214-8c00-d2271167976e\") " Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.245950 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-kube-api-access-6f555" (OuterVolumeSpecName: "kube-api-access-6f555") pod "5f2fc5f1-4980-49de-8ff7-981bb9f4966c" (UID: "5f2fc5f1-4980-49de-8ff7-981bb9f4966c"). InnerVolumeSpecName "kube-api-access-6f555". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.247143 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5f2fc5f1-4980-49de-8ff7-981bb9f4966c" (UID: "5f2fc5f1-4980-49de-8ff7-981bb9f4966c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.255512 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8492667f-e261-4214-8c00-d2271167976e-kube-api-access-hrbpt" (OuterVolumeSpecName: "kube-api-access-hrbpt") pod "8492667f-e261-4214-8c00-d2271167976e" (UID: "8492667f-e261-4214-8c00-d2271167976e"). InnerVolumeSpecName "kube-api-access-hrbpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.264491 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f2fc5f1-4980-49de-8ff7-981bb9f4966c" (UID: "5f2fc5f1-4980-49de-8ff7-981bb9f4966c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.264876 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8492667f-e261-4214-8c00-d2271167976e" (UID: "8492667f-e261-4214-8c00-d2271167976e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.284353 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-config" (OuterVolumeSpecName: "config") pod "8492667f-e261-4214-8c00-d2271167976e" (UID: "8492667f-e261-4214-8c00-d2271167976e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.344513 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f555\" (UniqueName: \"kubernetes.io/projected/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-kube-api-access-6f555\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.344558 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.344570 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrbpt\" (UniqueName: \"kubernetes.io/projected/8492667f-e261-4214-8c00-d2271167976e-kube-api-access-hrbpt\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.344581 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.344591 4988 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5f2fc5f1-4980-49de-8ff7-981bb9f4966c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.344603 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8492667f-e261-4214-8c00-d2271167976e-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.512939 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0cb22d8-be3d-440b-9316-3406be73f68b" path="/var/lib/kubelet/pods/f0cb22d8-be3d-440b-9316-3406be73f68b/volumes" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.810373 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mzqm8" event={"ID":"8492667f-e261-4214-8c00-d2271167976e","Type":"ContainerDied","Data":"98851ec32f8e01ba9e0357b0cbdee27f29bde1b96a8876c4a5e06a6cfbfbcf89"} 
Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.810421 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98851ec32f8e01ba9e0357b0cbdee27f29bde1b96a8876c4a5e06a6cfbfbcf89" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.810419 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mzqm8" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.812472 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerStarted","Data":"a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e"} Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.814831 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-26h6n" event={"ID":"5f2fc5f1-4980-49de-8ff7-981bb9f4966c","Type":"ContainerDied","Data":"dea3886697a3e8e2f2be2af2c37d3343f1d7457a42afca8f50b96ca42e3c42d3"} Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.814902 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea3886697a3e8e2f2be2af2c37d3343f1d7457a42afca8f50b96ca42e3c42d3" Nov 23 07:07:10 crc kubenswrapper[4988]: I1123 07:07:10.814984 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-26h6n" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.055725 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-59frv"] Nov 23 07:07:11 crc kubenswrapper[4988]: E1123 07:07:11.056097 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2fc5f1-4980-49de-8ff7-981bb9f4966c" containerName="barbican-db-sync" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.056113 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2fc5f1-4980-49de-8ff7-981bb9f4966c" containerName="barbican-db-sync" Nov 23 07:07:11 crc kubenswrapper[4988]: E1123 07:07:11.056129 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8492667f-e261-4214-8c00-d2271167976e" containerName="neutron-db-sync" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.056137 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8492667f-e261-4214-8c00-d2271167976e" containerName="neutron-db-sync" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.097688 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8492667f-e261-4214-8c00-d2271167976e" containerName="neutron-db-sync" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.097723 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2fc5f1-4980-49de-8ff7-981bb9f4966c" containerName="barbican-db-sync" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.098678 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.100975 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-59frv"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.151004 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7bf555f794-8vm7k"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.163716 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.170881 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-v999q" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.171085 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.171185 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.224713 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bf555f794-8vm7k"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.252235 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7888d7fbb9-cqj2f"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.265136 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.268885 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.276787 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-sb\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.276830 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data-custom\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.276882 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-combined-ca-bundle\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.276944 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.276966 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-nb\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.277017 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2bzk\" (UniqueName: \"kubernetes.io/projected/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-kube-api-access-f2bzk\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.277044 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7zvq\" (UniqueName: \"kubernetes.io/projected/9afe27be-257c-4ea4-84c1-e41a289ad06a-kube-api-access-q7zvq\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.277075 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-swift-storage-0\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.277093 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-config\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.277135 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9afe27be-257c-4ea4-84c1-e41a289ad06a-logs\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.277151 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-svc\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.296326 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7888d7fbb9-cqj2f"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.344270 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-97bc9b8d4-j6jbm"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.345944 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.352649 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9wlpm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.354785 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-59frv"] Nov 23 07:07:11 crc kubenswrapper[4988]: E1123 07:07:11.355595 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-f2bzk ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6c654c9745-59frv" podUID="ce88b31c-585f-45ce-88aa-8ebecb6fbf5a" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.356328 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.356590 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.356816 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.367381 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-97bc9b8d4-j6jbm"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.376208 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-dnsbr"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.378025 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389103 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data-custom\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389156 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7zvq\" (UniqueName: \"kubernetes.io/projected/9afe27be-257c-4ea4-84c1-e41a289ad06a-kube-api-access-q7zvq\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389208 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-swift-storage-0\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389231 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-combined-ca-bundle\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389253 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-config\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389269 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fgbc\" (UniqueName: \"kubernetes.io/projected/fbae5c0b-cb91-459a-acb7-e494aedd6d99-kube-api-access-7fgbc\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389324 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9afe27be-257c-4ea4-84c1-e41a289ad06a-logs\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389347 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-svc\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389368 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389386 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbae5c0b-cb91-459a-acb7-e494aedd6d99-logs\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389419 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-sb\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389435 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data-custom\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389461 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-combined-ca-bundle\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 
23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389481 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389496 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-nb\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.389538 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2bzk\" (UniqueName: \"kubernetes.io/projected/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-kube-api-access-f2bzk\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.390769 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-swift-storage-0\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.391365 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-config\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.391790 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9afe27be-257c-4ea4-84c1-e41a289ad06a-logs\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.392634 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-svc\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.393719 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-nb\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.394302 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-sb\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.396154 4988 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.402245 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-dnsbr"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.412318 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-combined-ca-bundle\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.419840 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.433449 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2bzk\" (UniqueName: \"kubernetes.io/projected/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-kube-api-access-f2bzk\") pod \"dnsmasq-dns-6c654c9745-59frv\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.445164 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data-custom\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.456916 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7zvq\" (UniqueName: \"kubernetes.io/projected/9afe27be-257c-4ea4-84c1-e41a289ad06a-kube-api-access-q7zvq\") pod \"barbican-keystone-listener-7bf555f794-8vm7k\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.492993 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-combined-ca-bundle\") pod \"95b8327c-fa6e-40b7-984e-c819d78da49b\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493095 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7x4z\" (UniqueName: \"kubernetes.io/projected/95b8327c-fa6e-40b7-984e-c819d78da49b-kube-api-access-g7x4z\") pod \"95b8327c-fa6e-40b7-984e-c819d78da49b\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493137 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-db-sync-config-data\") pod \"95b8327c-fa6e-40b7-984e-c819d78da49b\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493185 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-scripts\") pod \"95b8327c-fa6e-40b7-984e-c819d78da49b\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493276 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95b8327c-fa6e-40b7-984e-c819d78da49b-etc-machine-id\") pod \"95b8327c-fa6e-40b7-984e-c819d78da49b\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493319 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-config-data\") pod \"95b8327c-fa6e-40b7-984e-c819d78da49b\" (UID: \"95b8327c-fa6e-40b7-984e-c819d78da49b\") " Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493515 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data-custom\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493559 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493594 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-combined-ca-bundle\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493615 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-ovndb-tls-certs\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493644 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fgbc\" (UniqueName: \"kubernetes.io/projected/fbae5c0b-cb91-459a-acb7-e494aedd6d99-kube-api-access-7fgbc\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493680 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493698 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2p2\" (UniqueName: 
\"kubernetes.io/projected/13617239-a2fa-4135-a583-b864d2c41dbf-kube-api-access-zg2p2\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493715 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-config\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493745 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-config\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493776 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493799 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493818 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbae5c0b-cb91-459a-acb7-e494aedd6d99-logs\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493839 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-svc\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493889 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-combined-ca-bundle\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493907 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fkmb\" (UniqueName: \"kubernetes.io/projected/0777df4e-bc8b-4260-a848-fe68de358bbe-kube-api-access-2fkmb\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.493950 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-httpd-config\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.503697 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95b8327c-fa6e-40b7-984e-c819d78da49b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "95b8327c-fa6e-40b7-984e-c819d78da49b" (UID: "95b8327c-fa6e-40b7-984e-c819d78da49b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.504643 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "95b8327c-fa6e-40b7-984e-c819d78da49b" (UID: "95b8327c-fa6e-40b7-984e-c819d78da49b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.505601 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b8327c-fa6e-40b7-984e-c819d78da49b-kube-api-access-g7x4z" (OuterVolumeSpecName: "kube-api-access-g7x4z") pod "95b8327c-fa6e-40b7-984e-c819d78da49b" (UID: "95b8327c-fa6e-40b7-984e-c819d78da49b"). InnerVolumeSpecName "kube-api-access-g7x4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.507893 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-67cf57977d-cbhzp"] Nov 23 07:07:11 crc kubenswrapper[4988]: E1123 07:07:11.510180 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b8327c-fa6e-40b7-984e-c819d78da49b" containerName="cinder-db-sync" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.510352 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b8327c-fa6e-40b7-984e-c819d78da49b" containerName="cinder-db-sync" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.510601 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b8327c-fa6e-40b7-984e-c819d78da49b" containerName="cinder-db-sync" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.512739 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.513788 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbae5c0b-cb91-459a-acb7-e494aedd6d99-logs\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.514702 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.517500 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.529154 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fgbc\" (UniqueName: \"kubernetes.io/projected/fbae5c0b-cb91-459a-acb7-e494aedd6d99-kube-api-access-7fgbc\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.526317 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-67cf57977d-cbhzp"] Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.532075 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-combined-ca-bundle\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.543848 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data-custom\") pod \"barbican-worker-7888d7fbb9-cqj2f\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.551593 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-scripts" (OuterVolumeSpecName: "scripts") pod "95b8327c-fa6e-40b7-984e-c819d78da49b" (UID: "95b8327c-fa6e-40b7-984e-c819d78da49b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.567683 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.577704 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95b8327c-fa6e-40b7-984e-c819d78da49b" (UID: "95b8327c-fa6e-40b7-984e-c819d78da49b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595146 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595247 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-ovndb-tls-certs\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595277 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzhwz\" (UniqueName: \"kubernetes.io/projected/d7abd5c6-9353-4af5-bcc3-9d6e66517978-kube-api-access-xzhwz\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595304 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-combined-ca-bundle\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595346 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7abd5c6-9353-4af5-bcc3-9d6e66517978-logs\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595382 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595406 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595432 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-config\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595453 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg2p2\" (UniqueName: \"kubernetes.io/projected/13617239-a2fa-4135-a583-b864d2c41dbf-kube-api-access-zg2p2\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: 
\"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595511 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-config\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595549 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data-custom\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595579 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595608 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-svc\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595661 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-combined-ca-bundle\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595687 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fkmb\" (UniqueName: \"kubernetes.io/projected/0777df4e-bc8b-4260-a848-fe68de358bbe-kube-api-access-2fkmb\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595744 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-httpd-config\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595818 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595834 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7x4z\" (UniqueName: \"kubernetes.io/projected/95b8327c-fa6e-40b7-984e-c819d78da49b-kube-api-access-g7x4z\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595847 4988 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595860 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.595871 4988 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95b8327c-fa6e-40b7-984e-c819d78da49b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.606742 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-ovndb-tls-certs\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.609072 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-config\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.610149 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-combined-ca-bundle\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.612088 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.612582 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.612606 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-config\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.614470 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.614941 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-svc\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: 
\"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.616298 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-httpd-config\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.618797 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fkmb\" (UniqueName: \"kubernetes.io/projected/0777df4e-bc8b-4260-a848-fe68de358bbe-kube-api-access-2fkmb\") pod \"neutron-97bc9b8d4-j6jbm\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.619853 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg2p2\" (UniqueName: \"kubernetes.io/projected/13617239-a2fa-4135-a583-b864d2c41dbf-kube-api-access-zg2p2\") pod \"dnsmasq-dns-5cc67f459c-dnsbr\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.645368 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-config-data" (OuterVolumeSpecName: "config-data") pod "95b8327c-fa6e-40b7-984e-c819d78da49b" (UID: "95b8327c-fa6e-40b7-984e-c819d78da49b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.668992 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.699757 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzhwz\" (UniqueName: \"kubernetes.io/projected/d7abd5c6-9353-4af5-bcc3-9d6e66517978-kube-api-access-xzhwz\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.699813 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-combined-ca-bundle\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.699870 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7abd5c6-9353-4af5-bcc3-9d6e66517978-logs\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.699907 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.700012 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data-custom\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.702651 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7abd5c6-9353-4af5-bcc3-9d6e66517978-logs\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.704869 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95b8327c-fa6e-40b7-984e-c819d78da49b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.708054 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-combined-ca-bundle\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.717360 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.727777 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzhwz\" (UniqueName: \"kubernetes.io/projected/d7abd5c6-9353-4af5-bcc3-9d6e66517978-kube-api-access-xzhwz\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.742008 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data-custom\") pod \"barbican-api-67cf57977d-cbhzp\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") " pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.858046 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.877929 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerStarted","Data":"edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc"} Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.890303 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.892622 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4mmc7" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.893173 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.894070 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4mmc7" event={"ID":"95b8327c-fa6e-40b7-984e-c819d78da49b","Type":"ContainerDied","Data":"6f3c79b0b29bb011d77ece8cdf88d600ed04ba1a9d9b8029f9e1363ef1d66f6b"} Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.894133 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f3c79b0b29bb011d77ece8cdf88d600ed04ba1a9d9b8029f9e1363ef1d66f6b" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.895950 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:11 crc kubenswrapper[4988]: I1123 07:07:11.921710 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.015426 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2bzk\" (UniqueName: \"kubernetes.io/projected/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-kube-api-access-f2bzk\") pod \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.015495 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-sb\") pod \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.015567 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-svc\") pod \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.015611 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-swift-storage-0\") pod \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.015667 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-nb\") pod \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.015694 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-config\") pod \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\" (UID: \"ce88b31c-585f-45ce-88aa-8ebecb6fbf5a\") " Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.016776 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-config" (OuterVolumeSpecName: "config") pod "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a" (UID: "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.017086 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a" (UID: "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.017393 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a" (UID: "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.019409 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a" (UID: "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.023318 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a" (UID: "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.024666 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-kube-api-access-f2bzk" (OuterVolumeSpecName: "kube-api-access-f2bzk") pod "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a" (UID: "ce88b31c-585f-45ce-88aa-8ebecb6fbf5a"). InnerVolumeSpecName "kube-api-access-f2bzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.085730 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.092930 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.106692 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.106997 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.107261 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.107566 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p7zwl" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.118231 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.118466 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v2j7\" (UniqueName: \"kubernetes.io/projected/28630714-a49b-4cb2-a9cc-484f47b83b74-kube-api-access-8v2j7\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.118569 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.118666 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.118810 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-scripts\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.118896 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28630714-a49b-4cb2-a9cc-484f47b83b74-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.119002 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2bzk\" (UniqueName: \"kubernetes.io/projected/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-kube-api-access-f2bzk\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.119061 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.119115 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.119173 4988 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.119246 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.119316 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.124704 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.160281 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-dnsbr"] Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.218517 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7888d7fbb9-cqj2f"] Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.221176 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.222100 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v2j7\" (UniqueName: \"kubernetes.io/projected/28630714-a49b-4cb2-a9cc-484f47b83b74-kube-api-access-8v2j7\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.222182 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.222533 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.222604 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-scripts\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 
07:07:12.222652 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28630714-a49b-4cb2-a9cc-484f47b83b74-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.222761 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28630714-a49b-4cb2-a9cc-484f47b83b74-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.261621 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-797bbc649-gtsjs"] Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.266616 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.274453 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.277070 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.281237 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-scripts\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.284291 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.291985 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-gtsjs"] Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.301054 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bf555f794-8vm7k"] Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.305365 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v2j7\" (UniqueName: \"kubernetes.io/projected/28630714-a49b-4cb2-a9cc-484f47b83b74-kube-api-access-8v2j7\") pod \"cinder-scheduler-0\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.324722 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " 
pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.324765 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5wvc\" (UniqueName: \"kubernetes.io/projected/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-kube-api-access-s5wvc\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.324821 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.324847 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.324890 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-config\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.324913 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-svc\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.326979 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.329567 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.335735 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.344530 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426112 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426163 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1718ea-f44e-41f4-9229-80af33e66280-logs\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426245 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426269 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-scripts\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426308 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b1718ea-f44e-41f4-9229-80af33e66280-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426339 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426368 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-config\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426398 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-svc\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426418 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data-custom\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426477 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426529 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426563 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5wvc\" (UniqueName: \"kubernetes.io/projected/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-kube-api-access-s5wvc\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.426629 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxkl8\" (UniqueName: \"kubernetes.io/projected/2b1718ea-f44e-41f4-9229-80af33e66280-kube-api-access-dxkl8\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.428178 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.429052 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-config\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.429727 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.429920 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-svc\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.430500 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: 
\"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.448630 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.492264 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5wvc\" (UniqueName: \"kubernetes.io/projected/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-kube-api-access-s5wvc\") pod \"dnsmasq-dns-797bbc649-gtsjs\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.537901 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxkl8\" (UniqueName: \"kubernetes.io/projected/2b1718ea-f44e-41f4-9229-80af33e66280-kube-api-access-dxkl8\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.537967 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1718ea-f44e-41f4-9229-80af33e66280-logs\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.537996 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-scripts\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.538033 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b1718ea-f44e-41f4-9229-80af33e66280-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.538069 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.538101 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data-custom\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.538162 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.545989 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1718ea-f44e-41f4-9229-80af33e66280-logs\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.550140 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.558247 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.558375 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b1718ea-f44e-41f4-9229-80af33e66280-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.564034 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-scripts\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.567240 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data-custom\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.602569 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxkl8\" (UniqueName: \"kubernetes.io/projected/2b1718ea-f44e-41f4-9229-80af33e66280-kube-api-access-dxkl8\") pod \"cinder-api-0\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.604705 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.683356 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.917310 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" event={"ID":"fbae5c0b-cb91-459a-acb7-e494aedd6d99","Type":"ContainerStarted","Data":"27c13e65b896d9912c2a429f95509a8b45f5f1bf4cdadbd39cc61f88e3b8c6b0"} Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.920231 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" event={"ID":"9afe27be-257c-4ea4-84c1-e41a289ad06a","Type":"ContainerStarted","Data":"46855b109dd7a5bcb9825ab463a7ff752920ec20b6f706f3f5e06beb02a61ed3"} Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.935104 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c654c9745-59frv" Nov 23 07:07:12 crc kubenswrapper[4988]: I1123 07:07:12.935978 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerStarted","Data":"b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b"} Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.005244 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-59frv"] Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.022547 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c654c9745-59frv"] Nov 23 07:07:13 crc kubenswrapper[4988]: W1123 07:07:13.082616 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7abd5c6_9353_4af5_bcc3_9d6e66517978.slice/crio-e3abb95e97ed6a83213651f73d720d5348771aa07f4e11cf90b4ee1a7d84cb4c WatchSource:0}: Error finding container e3abb95e97ed6a83213651f73d720d5348771aa07f4e11cf90b4ee1a7d84cb4c: Status 404 returned error can't find the container with id e3abb95e97ed6a83213651f73d720d5348771aa07f4e11cf90b4ee1a7d84cb4c Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.094232 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-67cf57977d-cbhzp"] Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.107527 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-dnsbr"] Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.223218 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-97bc9b8d4-j6jbm"] Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.323998 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:07:13 crc kubenswrapper[4988]: W1123 07:07:13.339931 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b1718ea_f44e_41f4_9229_80af33e66280.slice/crio-920b4b2d8d48c9b2ce0ca37643b9c3859e14f76690d5ddd7a299371336bf9930 WatchSource:0}: Error finding container 920b4b2d8d48c9b2ce0ca37643b9c3859e14f76690d5ddd7a299371336bf9930: Status 404 returned error can't find the container with id 920b4b2d8d48c9b2ce0ca37643b9c3859e14f76690d5ddd7a299371336bf9930 Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.475792 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.488269 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-gtsjs"] Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.987597 4988 generic.go:334] "Generic (PLEG): container finished" podID="13617239-a2fa-4135-a583-b864d2c41dbf" containerID="6b444bd05baa7b48c7c1834ce300092ea9ce93df0610954498793d2f393f9260" exitCode=0 Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.988508 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" event={"ID":"13617239-a2fa-4135-a583-b864d2c41dbf","Type":"ContainerDied","Data":"6b444bd05baa7b48c7c1834ce300092ea9ce93df0610954498793d2f393f9260"} Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.988567 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" 
event={"ID":"13617239-a2fa-4135-a583-b864d2c41dbf","Type":"ContainerStarted","Data":"ba4223dd0235670b4860c9e583881e31d61ed382c9fdd14c0344c7d8553a0e71"} Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.995539 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-97bc9b8d4-j6jbm" event={"ID":"0777df4e-bc8b-4260-a848-fe68de358bbe","Type":"ContainerStarted","Data":"a9147a590705da748bcca443317184e83e47a36d614d35acb8aa413769488875"} Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.995598 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-97bc9b8d4-j6jbm" event={"ID":"0777df4e-bc8b-4260-a848-fe68de358bbe","Type":"ContainerStarted","Data":"7d3786cec333827d2dddbf2c60e55adbe174eca66ef6b70a381f513a20d5ec0a"} Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.997062 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" event={"ID":"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be","Type":"ContainerStarted","Data":"5482d01fd0ee1e1b934f2c596dca9f2d79b97dfaff5e254466b76ce23860c8d5"} Nov 23 07:07:13 crc kubenswrapper[4988]: I1123 07:07:13.998801 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2b1718ea-f44e-41f4-9229-80af33e66280","Type":"ContainerStarted","Data":"920b4b2d8d48c9b2ce0ca37643b9c3859e14f76690d5ddd7a299371336bf9930"} Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.015036 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67cf57977d-cbhzp" event={"ID":"d7abd5c6-9353-4af5-bcc3-9d6e66517978","Type":"ContainerStarted","Data":"84855dbb74b676ed94de2f2ad4b575dd9058d91aa6c4efcfc3d5657bbb17aa8c"} Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.015378 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67cf57977d-cbhzp" event={"ID":"d7abd5c6-9353-4af5-bcc3-9d6e66517978","Type":"ContainerStarted","Data":"e3abb95e97ed6a83213651f73d720d5348771aa07f4e11cf90b4ee1a7d84cb4c"} Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.019531 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"28630714-a49b-4cb2-a9cc-484f47b83b74","Type":"ContainerStarted","Data":"14eee15dd2c6a39a9464a75b0e14e527175245b62811af9b00a9306b9bd9de0f"} Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.518606 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce88b31c-585f-45ce-88aa-8ebecb6fbf5a" path="/var/lib/kubelet/pods/ce88b31c-585f-45ce-88aa-8ebecb6fbf5a/volumes" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.724507 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.811500 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-svc\") pod \"13617239-a2fa-4135-a583-b864d2c41dbf\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.811588 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-swift-storage-0\") pod \"13617239-a2fa-4135-a583-b864d2c41dbf\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.811649 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg2p2\" (UniqueName: \"kubernetes.io/projected/13617239-a2fa-4135-a583-b864d2c41dbf-kube-api-access-zg2p2\") pod \"13617239-a2fa-4135-a583-b864d2c41dbf\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.811677 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-config\") pod \"13617239-a2fa-4135-a583-b864d2c41dbf\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.811730 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-nb\") pod \"13617239-a2fa-4135-a583-b864d2c41dbf\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.811829 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-sb\") pod \"13617239-a2fa-4135-a583-b864d2c41dbf\" (UID: \"13617239-a2fa-4135-a583-b864d2c41dbf\") " Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.827344 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13617239-a2fa-4135-a583-b864d2c41dbf-kube-api-access-zg2p2" (OuterVolumeSpecName: "kube-api-access-zg2p2") pod "13617239-a2fa-4135-a583-b864d2c41dbf" (UID: "13617239-a2fa-4135-a583-b864d2c41dbf"). InnerVolumeSpecName "kube-api-access-zg2p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.847348 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-config" (OuterVolumeSpecName: "config") pod "13617239-a2fa-4135-a583-b864d2c41dbf" (UID: "13617239-a2fa-4135-a583-b864d2c41dbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.847949 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13617239-a2fa-4135-a583-b864d2c41dbf" (UID: "13617239-a2fa-4135-a583-b864d2c41dbf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.863799 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13617239-a2fa-4135-a583-b864d2c41dbf" (UID: "13617239-a2fa-4135-a583-b864d2c41dbf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.886721 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13617239-a2fa-4135-a583-b864d2c41dbf" (UID: "13617239-a2fa-4135-a583-b864d2c41dbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.888692 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13617239-a2fa-4135-a583-b864d2c41dbf" (UID: "13617239-a2fa-4135-a583-b864d2c41dbf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.913345 4988 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.913383 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg2p2\" (UniqueName: \"kubernetes.io/projected/13617239-a2fa-4135-a583-b864d2c41dbf-kube-api-access-zg2p2\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.913397 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.913407 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.913416 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:14 crc kubenswrapper[4988]: I1123 07:07:14.913424 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13617239-a2fa-4135-a583-b864d2c41dbf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.030750 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67cf57977d-cbhzp" event={"ID":"d7abd5c6-9353-4af5-bcc3-9d6e66517978","Type":"ContainerStarted","Data":"268fb0bd0739fa78106d36525c18afa10cb4d516d25d1869d4a502190f90012b"} Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.030828 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.030843 4988 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.034666 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.034678 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-dnsbr" event={"ID":"13617239-a2fa-4135-a583-b864d2c41dbf","Type":"ContainerDied","Data":"ba4223dd0235670b4860c9e583881e31d61ed382c9fdd14c0344c7d8553a0e71"} Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.034729 4988 scope.go:117] "RemoveContainer" containerID="6b444bd05baa7b48c7c1834ce300092ea9ce93df0610954498793d2f393f9260" Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.041640 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-97bc9b8d4-j6jbm" event={"ID":"0777df4e-bc8b-4260-a848-fe68de358bbe","Type":"ContainerStarted","Data":"c2342eb37abcdebf788ea07b47f06fb4035f9c416f9399acc7b408319740da2d"} Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.042670 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.044924 4988 generic.go:334] "Generic (PLEG): container finished" podID="ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" containerID="24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7" exitCode=0 Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.044967 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" event={"ID":"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be","Type":"ContainerDied","Data":"24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7"} Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.051885 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-67cf57977d-cbhzp" podStartSLOduration=4.051866121 podStartE2EDuration="4.051866121s" podCreationTimestamp="2025-11-23 07:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:15.050281323 +0000 UTC m=+1287.358794106" watchObservedRunningTime="2025-11-23 07:07:15.051866121 +0000 UTC m=+1287.360378884" Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.065634 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2b1718ea-f44e-41f4-9229-80af33e66280","Type":"ContainerStarted","Data":"59b045f5ced0552d1b892cd62da608c62c1928d40258a314e61cdfc8af89b0fe"} Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.088355 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-97bc9b8d4-j6jbm" podStartSLOduration=4.088331979 podStartE2EDuration="4.088331979s" podCreationTimestamp="2025-11-23 07:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:15.07409433 +0000 UTC m=+1287.382607113" watchObservedRunningTime="2025-11-23 07:07:15.088331979 +0000 UTC m=+1287.396844752" Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.160367 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-dnsbr"] Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.168234 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-5cc67f459c-dnsbr"] Nov 23 07:07:15 crc kubenswrapper[4988]: I1123 07:07:15.482488 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:07:16 crc kubenswrapper[4988]: I1123 07:07:16.507079 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13617239-a2fa-4135-a583-b864d2c41dbf" path="/var/lib/kubelet/pods/13617239-a2fa-4135-a583-b864d2c41dbf/volumes" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.661684 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68dbd6466f-n6f5g"] Nov 23 07:07:17 crc kubenswrapper[4988]: E1123 07:07:17.662150 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13617239-a2fa-4135-a583-b864d2c41dbf" containerName="init" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.662167 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="13617239-a2fa-4135-a583-b864d2c41dbf" containerName="init" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.662414 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="13617239-a2fa-4135-a583-b864d2c41dbf" containerName="init" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.663749 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.667405 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.667588 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.712984 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68dbd6466f-n6f5g"] Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.795773 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-public-tls-certs\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.795849 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-config\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.795875 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwpkt\" (UniqueName: \"kubernetes.io/projected/873f95e0-7013-479e-b8b1-d3cf948d24fe-kube-api-access-dwpkt\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.795899 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-internal-tls-certs\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.795919 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-httpd-config\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.796002 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-ovndb-tls-certs\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.796057 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-combined-ca-bundle\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.898068 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-combined-ca-bundle\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.898131 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-public-tls-certs\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.898218 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-config\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.898250 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwpkt\" (UniqueName: \"kubernetes.io/projected/873f95e0-7013-479e-b8b1-d3cf948d24fe-kube-api-access-dwpkt\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.898277 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-internal-tls-certs\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.898303 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-httpd-config\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.898367 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-ovndb-tls-certs\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.904256 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-config\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.904749 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-httpd-config\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.904990 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-internal-tls-certs\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.905792 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-ovndb-tls-certs\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.906903 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-combined-ca-bundle\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.921122 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-public-tls-certs\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.922915 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwpkt\" (UniqueName: \"kubernetes.io/projected/873f95e0-7013-479e-b8b1-d3cf948d24fe-kube-api-access-dwpkt\") pod \"neutron-68dbd6466f-n6f5g\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:17 crc kubenswrapper[4988]: I1123 07:07:17.990715 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:18 crc kubenswrapper[4988]: W1123 07:07:18.705095 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod873f95e0_7013_479e_b8b1_d3cf948d24fe.slice/crio-8e9bd4fcecf6c442f4380daaffd5536973cf90cef33a2a9d14139a5be44e65c9 WatchSource:0}: Error finding container 8e9bd4fcecf6c442f4380daaffd5536973cf90cef33a2a9d14139a5be44e65c9: Status 404 returned error can't find the container with id 8e9bd4fcecf6c442f4380daaffd5536973cf90cef33a2a9d14139a5be44e65c9 Nov 23 07:07:18 crc kubenswrapper[4988]: I1123 07:07:18.711585 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68dbd6466f-n6f5g"] Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.141150 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2b1718ea-f44e-41f4-9229-80af33e66280","Type":"ContainerStarted","Data":"3adeffdd5d441664c518d04f180da9de8589fc93a41633d4cc3e6535f1bc0de0"} Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.141534 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.141415 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2b1718ea-f44e-41f4-9229-80af33e66280" containerName="cinder-api-log" containerID="cri-o://59b045f5ced0552d1b892cd62da608c62c1928d40258a314e61cdfc8af89b0fe" gracePeriod=30 Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.141634 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="2b1718ea-f44e-41f4-9229-80af33e66280" containerName="cinder-api" containerID="cri-o://3adeffdd5d441664c518d04f180da9de8589fc93a41633d4cc3e6535f1bc0de0" gracePeriod=30 Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.152853 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"28630714-a49b-4cb2-a9cc-484f47b83b74","Type":"ContainerStarted","Data":"fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181"} Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.154496 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" event={"ID":"fbae5c0b-cb91-459a-acb7-e494aedd6d99","Type":"ContainerStarted","Data":"8d370a258077eef29df07553ebb57bc3f0df94518539e125a0c3eaef83ef1b5b"} Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.154542 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" event={"ID":"fbae5c0b-cb91-459a-acb7-e494aedd6d99","Type":"ContainerStarted","Data":"c20e80b908053c9ad38b943cfa24ecc2c59c1063094c7728511419afd22791ce"} Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.161368 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" event={"ID":"9afe27be-257c-4ea4-84c1-e41a289ad06a","Type":"ContainerStarted","Data":"d1dbf13c4c51f91d80504e9813025e575415f2d87f015a192bbdff65a11f6ae1"} Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.161419 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" event={"ID":"9afe27be-257c-4ea4-84c1-e41a289ad06a","Type":"ContainerStarted","Data":"398b87a138cec8f732064d3f7cb513a557fa9c4ba1deb887d5a4585196f85d30"} Nov 23 07:07:19 crc 
kubenswrapper[4988]: I1123 07:07:19.163225 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68dbd6466f-n6f5g" event={"ID":"873f95e0-7013-479e-b8b1-d3cf948d24fe","Type":"ContainerStarted","Data":"8e9bd4fcecf6c442f4380daaffd5536973cf90cef33a2a9d14139a5be44e65c9"} Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.175083 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.17506164 podStartE2EDuration="7.17506164s" podCreationTimestamp="2025-11-23 07:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:19.165491102 +0000 UTC m=+1291.474003875" watchObservedRunningTime="2025-11-23 07:07:19.17506164 +0000 UTC m=+1291.483574403" Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.177520 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerStarted","Data":"1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796"} Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.177830 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.180546 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" event={"ID":"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be","Type":"ContainerStarted","Data":"7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa"} Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.181285 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.222919 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" podStartSLOduration=2.626355349 podStartE2EDuration="8.222897669s" podCreationTimestamp="2025-11-23 07:07:11 +0000 UTC" firstStartedPulling="2025-11-23 07:07:12.312564765 +0000 UTC m=+1284.621077528" lastFinishedPulling="2025-11-23 07:07:17.909107095 +0000 UTC m=+1290.217619848" observedRunningTime="2025-11-23 07:07:19.217793257 +0000 UTC m=+1291.526306030" watchObservedRunningTime="2025-11-23 07:07:19.222897669 +0000 UTC m=+1291.531410432" Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.227829 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" podStartSLOduration=2.7432287520000003 podStartE2EDuration="8.227810156s" podCreationTimestamp="2025-11-23 07:07:11 +0000 UTC" firstStartedPulling="2025-11-23 07:07:12.423393334 +0000 UTC m=+1284.731906097" lastFinishedPulling="2025-11-23 07:07:17.907974738 +0000 UTC m=+1290.216487501" observedRunningTime="2025-11-23 07:07:19.198448857 +0000 UTC m=+1291.506961640" watchObservedRunningTime="2025-11-23 07:07:19.227810156 +0000 UTC m=+1291.536322919" Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.242770 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" podStartSLOduration=7.242747821 podStartE2EDuration="7.242747821s" podCreationTimestamp="2025-11-23 07:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:19.241763108 +0000 UTC m=+1291.550275891" 
watchObservedRunningTime="2025-11-23 07:07:19.242747821 +0000 UTC m=+1291.551260584" Nov 23 07:07:19 crc kubenswrapper[4988]: I1123 07:07:19.281486 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.990811303 podStartE2EDuration="11.281434963s" podCreationTimestamp="2025-11-23 07:07:08 +0000 UTC" firstStartedPulling="2025-11-23 07:07:09.626989868 +0000 UTC m=+1281.935502631" lastFinishedPulling="2025-11-23 07:07:17.917613528 +0000 UTC m=+1290.226126291" observedRunningTime="2025-11-23 07:07:19.267646354 +0000 UTC m=+1291.576159127" watchObservedRunningTime="2025-11-23 07:07:19.281434963 +0000 UTC m=+1291.589947726" Nov 23 07:07:20 crc kubenswrapper[4988]: I1123 07:07:20.199835 4988 generic.go:334] "Generic (PLEG): container finished" podID="2b1718ea-f44e-41f4-9229-80af33e66280" containerID="59b045f5ced0552d1b892cd62da608c62c1928d40258a314e61cdfc8af89b0fe" exitCode=143 Nov 23 07:07:20 crc kubenswrapper[4988]: I1123 07:07:20.199913 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2b1718ea-f44e-41f4-9229-80af33e66280","Type":"ContainerDied","Data":"59b045f5ced0552d1b892cd62da608c62c1928d40258a314e61cdfc8af89b0fe"} Nov 23 07:07:20 crc kubenswrapper[4988]: I1123 07:07:20.202644 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"28630714-a49b-4cb2-a9cc-484f47b83b74","Type":"ContainerStarted","Data":"cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5"} Nov 23 07:07:20 crc kubenswrapper[4988]: I1123 07:07:20.207267 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68dbd6466f-n6f5g" event={"ID":"873f95e0-7013-479e-b8b1-d3cf948d24fe","Type":"ContainerStarted","Data":"79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a"} Nov 23 07:07:20 crc kubenswrapper[4988]: I1123 07:07:20.207317 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68dbd6466f-n6f5g" event={"ID":"873f95e0-7013-479e-b8b1-d3cf948d24fe","Type":"ContainerStarted","Data":"9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6"} Nov 23 07:07:20 crc kubenswrapper[4988]: I1123 07:07:20.223699 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.769749105 podStartE2EDuration="8.223684289s" podCreationTimestamp="2025-11-23 07:07:12 +0000 UTC" firstStartedPulling="2025-11-23 07:07:13.490425711 +0000 UTC m=+1285.798938474" lastFinishedPulling="2025-11-23 07:07:17.944360895 +0000 UTC m=+1290.252873658" observedRunningTime="2025-11-23 07:07:20.221453016 +0000 UTC m=+1292.529965789" watchObservedRunningTime="2025-11-23 07:07:20.223684289 +0000 UTC m=+1292.532197052" Nov 23 07:07:20 crc kubenswrapper[4988]: I1123 07:07:20.251607 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68dbd6466f-n6f5g" podStartSLOduration=3.251587623 podStartE2EDuration="3.251587623s" podCreationTimestamp="2025-11-23 07:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:20.24474247 +0000 UTC m=+1292.553255243" watchObservedRunningTime="2025-11-23 07:07:20.251587623 +0000 UTC m=+1292.560100386" Nov 23 07:07:21 crc kubenswrapper[4988]: I1123 07:07:21.221574 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:07:21 crc 
kubenswrapper[4988]: I1123 07:07:21.672232 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:07:21 crc kubenswrapper[4988]: I1123 07:07:21.672280 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.074726 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f7fdc8956-g6vw5"] Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.076681 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.079133 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.079737 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.085816 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f7fdc8956-g6vw5"] Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.202877 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data-custom\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.202951 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwkmz\" (UniqueName: \"kubernetes.io/projected/61784d29-67cb-4150-923e-0e819bdde923-kube-api-access-bwkmz\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.203027 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-internal-tls-certs\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.203075 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61784d29-67cb-4150-923e-0e819bdde923-logs\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.203096 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: 
\"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.203143 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-combined-ca-bundle\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.203184 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-public-tls-certs\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.304221 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-internal-tls-certs\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.304318 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61784d29-67cb-4150-923e-0e819bdde923-logs\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.304338 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.304392 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-combined-ca-bundle\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.304427 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-public-tls-certs\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.304450 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data-custom\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.304475 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwkmz\" (UniqueName: \"kubernetes.io/projected/61784d29-67cb-4150-923e-0e819bdde923-kube-api-access-bwkmz\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: 
\"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.305629 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61784d29-67cb-4150-923e-0e819bdde923-logs\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.309272 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-internal-tls-certs\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.309349 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-public-tls-certs\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.313841 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-combined-ca-bundle\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.318943 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data-custom\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.326930 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwkmz\" (UniqueName: \"kubernetes.io/projected/61784d29-67cb-4150-923e-0e819bdde923-kube-api-access-bwkmz\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.339543 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data\") pod \"barbican-api-f7fdc8956-g6vw5\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.407059 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.450080 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 07:07:22 crc kubenswrapper[4988]: I1123 07:07:22.952319 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f7fdc8956-g6vw5"] Nov 23 07:07:23 crc kubenswrapper[4988]: I1123 07:07:23.242338 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f7fdc8956-g6vw5" event={"ID":"61784d29-67cb-4150-923e-0e819bdde923","Type":"ContainerStarted","Data":"8afa739f5110059b465e6885ff5859dc6184950082e63a26a379905cd8929a41"} Nov 23 07:07:23 crc kubenswrapper[4988]: I1123 07:07:23.242724 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f7fdc8956-g6vw5" event={"ID":"61784d29-67cb-4150-923e-0e819bdde923","Type":"ContainerStarted","Data":"ea8da0111c8587b59ae12fccf4684321f6bb6b2d51128c9eefb28473c72a5363"} Nov 23 07:07:23 crc kubenswrapper[4988]: I1123 07:07:23.521544 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:23 crc kubenswrapper[4988]: I1123 07:07:23.680936 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-67cf57977d-cbhzp" Nov 23 07:07:24 crc kubenswrapper[4988]: I1123 07:07:24.189625 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:07:24 crc kubenswrapper[4988]: I1123 07:07:24.260173 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f7fdc8956-g6vw5" event={"ID":"61784d29-67cb-4150-923e-0e819bdde923","Type":"ContainerStarted","Data":"9177522bc27ecd27e87dbfaffac6a6f6968557f56ceb08013aef630c895b62fb"} Nov 23 07:07:25 crc kubenswrapper[4988]: I1123 07:07:25.140569 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:07:25 crc kubenswrapper[4988]: I1123 07:07:25.162646 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f7fdc8956-g6vw5" podStartSLOduration=3.162627131 podStartE2EDuration="3.162627131s" podCreationTimestamp="2025-11-23 07:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:24.285111386 +0000 UTC m=+1296.593624149" watchObservedRunningTime="2025-11-23 07:07:25.162627131 +0000 UTC m=+1297.471139894" Nov 23 07:07:25 crc kubenswrapper[4988]: I1123 07:07:25.275215 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:25 crc kubenswrapper[4988]: I1123 07:07:25.275254 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:27 crc kubenswrapper[4988]: I1123 07:07:27.607366 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:07:27 crc kubenswrapper[4988]: I1123 07:07:27.682089 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-5mmkw"] Nov 23 07:07:27 crc kubenswrapper[4988]: I1123 07:07:27.683098 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" 
podUID="8290145a-df4b-4381-81d8-d2ce14d105fd" containerName="dnsmasq-dns" containerID="cri-o://ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61" gracePeriod=10 Nov 23 07:07:27 crc kubenswrapper[4988]: I1123 07:07:27.818790 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 23 07:07:27 crc kubenswrapper[4988]: I1123 07:07:27.864077 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.257664 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.305354 4988 generic.go:334] "Generic (PLEG): container finished" podID="8290145a-df4b-4381-81d8-d2ce14d105fd" containerID="ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61" exitCode=0 Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.305783 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" event={"ID":"8290145a-df4b-4381-81d8-d2ce14d105fd","Type":"ContainerDied","Data":"ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61"} Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.305845 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" event={"ID":"8290145a-df4b-4381-81d8-d2ce14d105fd","Type":"ContainerDied","Data":"e2e7ddfa884db486417f376bfeeaf02c7ccc2ebf2596e86338486a64d6048707"} Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.305865 4988 scope.go:117] "RemoveContainer" containerID="ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.305780 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.306130 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerName="cinder-scheduler" containerID="cri-o://fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181" gracePeriod=30 Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.307135 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerName="probe" containerID="cri-o://cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5" gracePeriod=30 Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.316759 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv5br\" (UniqueName: \"kubernetes.io/projected/8290145a-df4b-4381-81d8-d2ce14d105fd-kube-api-access-lv5br\") pod \"8290145a-df4b-4381-81d8-d2ce14d105fd\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.316904 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-swift-storage-0\") pod \"8290145a-df4b-4381-81d8-d2ce14d105fd\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.316932 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-nb\") pod \"8290145a-df4b-4381-81d8-d2ce14d105fd\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.317039 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-config\") pod \"8290145a-df4b-4381-81d8-d2ce14d105fd\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.317093 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-svc\") pod \"8290145a-df4b-4381-81d8-d2ce14d105fd\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.317122 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-sb\") pod \"8290145a-df4b-4381-81d8-d2ce14d105fd\" (UID: \"8290145a-df4b-4381-81d8-d2ce14d105fd\") " Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.335148 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8290145a-df4b-4381-81d8-d2ce14d105fd-kube-api-access-lv5br" (OuterVolumeSpecName: "kube-api-access-lv5br") pod "8290145a-df4b-4381-81d8-d2ce14d105fd" (UID: "8290145a-df4b-4381-81d8-d2ce14d105fd"). InnerVolumeSpecName "kube-api-access-lv5br". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.350294 4988 scope.go:117] "RemoveContainer" containerID="d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.365907 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-config" (OuterVolumeSpecName: "config") pod "8290145a-df4b-4381-81d8-d2ce14d105fd" (UID: "8290145a-df4b-4381-81d8-d2ce14d105fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.385790 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8290145a-df4b-4381-81d8-d2ce14d105fd" (UID: "8290145a-df4b-4381-81d8-d2ce14d105fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.426315 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.426347 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.426361 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv5br\" (UniqueName: \"kubernetes.io/projected/8290145a-df4b-4381-81d8-d2ce14d105fd-kube-api-access-lv5br\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.431539 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8290145a-df4b-4381-81d8-d2ce14d105fd" (UID: "8290145a-df4b-4381-81d8-d2ce14d105fd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.445503 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8290145a-df4b-4381-81d8-d2ce14d105fd" (UID: "8290145a-df4b-4381-81d8-d2ce14d105fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.462858 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8290145a-df4b-4381-81d8-d2ce14d105fd" (UID: "8290145a-df4b-4381-81d8-d2ce14d105fd"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.463268 4988 scope.go:117] "RemoveContainer" containerID="ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61" Nov 23 07:07:28 crc kubenswrapper[4988]: E1123 07:07:28.464583 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61\": container with ID starting with ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61 not found: ID does not exist" containerID="ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.464628 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61"} err="failed to get container status \"ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61\": rpc error: code = NotFound desc = could not find container \"ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61\": container with ID starting with ba659233d06a45385737c700e40570e0a1840688e9691be7dacb15a57542bb61 not found: ID does not exist" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.464649 4988 scope.go:117] "RemoveContainer" containerID="d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f" Nov 23 07:07:28 crc kubenswrapper[4988]: E1123 07:07:28.465693 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f\": container with ID starting with d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f not found: ID does not exist" containerID="d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.465736 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f"} err="failed to get container status \"d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f\": rpc error: code = NotFound desc = could not find container \"d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f\": container with ID starting with d43ae24136f10032df7a69cf82b695e31dac0b5f3b4e56218f38e3901c17aa1f not found: ID does not exist" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.527755 4988 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.527802 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.527816 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8290145a-df4b-4381-81d8-d2ce14d105fd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 07:07:28.627829 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-5mmkw"] Nov 23 07:07:28 crc kubenswrapper[4988]: I1123 
07:07:28.638156 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-5mmkw"] Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.258347 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 07:07:29 crc kubenswrapper[4988]: E1123 07:07:29.258809 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8290145a-df4b-4381-81d8-d2ce14d105fd" containerName="init" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.258834 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8290145a-df4b-4381-81d8-d2ce14d105fd" containerName="init" Nov 23 07:07:29 crc kubenswrapper[4988]: E1123 07:07:29.258868 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8290145a-df4b-4381-81d8-d2ce14d105fd" containerName="dnsmasq-dns" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.258876 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8290145a-df4b-4381-81d8-d2ce14d105fd" containerName="dnsmasq-dns" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.259050 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8290145a-df4b-4381-81d8-d2ce14d105fd" containerName="dnsmasq-dns" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.259670 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.263657 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.266206 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-x7pxc" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.266964 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.278317 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.332028 4988 generic.go:334] "Generic (PLEG): container finished" podID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerID="cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5" exitCode=0 Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.332643 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"28630714-a49b-4cb2-a9cc-484f47b83b74","Type":"ContainerDied","Data":"cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5"} Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.340967 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.341026 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtt5w\" (UniqueName: \"kubernetes.io/projected/c7bf6b82-46b9-4e89-a872-974f33c50df3-kube-api-access-rtt5w\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.341059 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.341110 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.443131 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.443295 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.443327 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtt5w\" (UniqueName: \"kubernetes.io/projected/c7bf6b82-46b9-4e89-a872-974f33c50df3-kube-api-access-rtt5w\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.443358 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.444363 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.449520 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.457657 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config-secret\") pod \"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.462791 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtt5w\" (UniqueName: \"kubernetes.io/projected/c7bf6b82-46b9-4e89-a872-974f33c50df3-kube-api-access-rtt5w\") pod 
\"openstackclient\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.477102 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.477926 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.485276 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.536868 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.538405 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.540232 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.647166 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.647459 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.647502 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config-secret\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.647544 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-kube-api-access-xpspp\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: E1123 07:07:29.711679 4988 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 23 07:07:29 crc kubenswrapper[4988]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_c7bf6b82-46b9-4e89-a872-974f33c50df3_0(c8157a23e7bea316237104e393a5305203522e989b5e788df3f12fbcedaf6b78): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c8157a23e7bea316237104e393a5305203522e989b5e788df3f12fbcedaf6b78" Netns:"/var/run/netns/710f368f-54c9-45dd-93df-f03b3017aacb" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c8157a23e7bea316237104e393a5305203522e989b5e788df3f12fbcedaf6b78;K8S_POD_UID=c7bf6b82-46b9-4e89-a872-974f33c50df3" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/c7bf6b82-46b9-4e89-a872-974f33c50df3]: expected pod UID "c7bf6b82-46b9-4e89-a872-974f33c50df3" but got "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" from Kube API Nov 23 07:07:29 crc kubenswrapper[4988]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 23 07:07:29 crc kubenswrapper[4988]: > Nov 23 07:07:29 crc kubenswrapper[4988]: E1123 07:07:29.711745 4988 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 23 07:07:29 crc kubenswrapper[4988]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_c7bf6b82-46b9-4e89-a872-974f33c50df3_0(c8157a23e7bea316237104e393a5305203522e989b5e788df3f12fbcedaf6b78): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c8157a23e7bea316237104e393a5305203522e989b5e788df3f12fbcedaf6b78" Netns:"/var/run/netns/710f368f-54c9-45dd-93df-f03b3017aacb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c8157a23e7bea316237104e393a5305203522e989b5e788df3f12fbcedaf6b78;K8S_POD_UID=c7bf6b82-46b9-4e89-a872-974f33c50df3" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/c7bf6b82-46b9-4e89-a872-974f33c50df3]: expected pod UID "c7bf6b82-46b9-4e89-a872-974f33c50df3" but got "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" from Kube API Nov 23 07:07:29 crc kubenswrapper[4988]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 23 07:07:29 crc kubenswrapper[4988]: > pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.749345 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.749453 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.749491 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config-secret\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.749521 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-kube-api-access-xpspp\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.750549 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.753402 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.754532 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config-secret\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.775760 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-kube-api-access-xpspp\") pod \"openstackclient\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.894055 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6bfdb6f865-pn8fq"] Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.895549 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.900643 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.900739 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.900769 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.916878 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bfdb6f865-pn8fq"] Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.924262 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.953594 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-run-httpd\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.953671 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-public-tls-certs\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.953718 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-config-data\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.953773 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-log-httpd\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.953809 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-etc-swift\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.953838 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-combined-ca-bundle\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.953862 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-internal-tls-certs\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:29 crc kubenswrapper[4988]: I1123 07:07:29.953881 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4mv\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-kube-api-access-7j4mv\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.055227 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-internal-tls-certs\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.055700 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j4mv\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-kube-api-access-7j4mv\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.055740 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-run-httpd\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.055800 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-public-tls-certs\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.056832 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-run-httpd\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.057499 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-config-data\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.057635 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-log-httpd\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.057692 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-etc-swift\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.057737 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-combined-ca-bundle\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.061625 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-public-tls-certs\") pod 
\"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.061892 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-log-httpd\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.072055 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-internal-tls-certs\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.077725 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-combined-ca-bundle\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.078004 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-config-data\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.087100 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-etc-swift\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.092005 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j4mv\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-kube-api-access-7j4mv\") pod \"swift-proxy-6bfdb6f865-pn8fq\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.229366 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.350030 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.357660 4988 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c7bf6b82-46b9-4e89-a872-974f33c50df3" podUID="2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.375793 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.442406 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.466962 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtt5w\" (UniqueName: \"kubernetes.io/projected/c7bf6b82-46b9-4e89-a872-974f33c50df3-kube-api-access-rtt5w\") pod \"c7bf6b82-46b9-4e89-a872-974f33c50df3\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.467036 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config-secret\") pod \"c7bf6b82-46b9-4e89-a872-974f33c50df3\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.467095 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-combined-ca-bundle\") pod \"c7bf6b82-46b9-4e89-a872-974f33c50df3\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.467136 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config\") pod \"c7bf6b82-46b9-4e89-a872-974f33c50df3\" (UID: \"c7bf6b82-46b9-4e89-a872-974f33c50df3\") " Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.469560 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c7bf6b82-46b9-4e89-a872-974f33c50df3" (UID: "c7bf6b82-46b9-4e89-a872-974f33c50df3"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.479421 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c7bf6b82-46b9-4e89-a872-974f33c50df3" (UID: "c7bf6b82-46b9-4e89-a872-974f33c50df3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.479633 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7bf6b82-46b9-4e89-a872-974f33c50df3" (UID: "c7bf6b82-46b9-4e89-a872-974f33c50df3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.493533 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7bf6b82-46b9-4e89-a872-974f33c50df3-kube-api-access-rtt5w" (OuterVolumeSpecName: "kube-api-access-rtt5w") pod "c7bf6b82-46b9-4e89-a872-974f33c50df3" (UID: "c7bf6b82-46b9-4e89-a872-974f33c50df3"). InnerVolumeSpecName "kube-api-access-rtt5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.516479 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8290145a-df4b-4381-81d8-d2ce14d105fd" path="/var/lib/kubelet/pods/8290145a-df4b-4381-81d8-d2ce14d105fd/volumes" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.517330 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7bf6b82-46b9-4e89-a872-974f33c50df3" path="/var/lib/kubelet/pods/c7bf6b82-46b9-4e89-a872-974f33c50df3/volumes" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.559037 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.569721 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtt5w\" (UniqueName: \"kubernetes.io/projected/c7bf6b82-46b9-4e89-a872-974f33c50df3-kube-api-access-rtt5w\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.569971 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.569987 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bf6b82-46b9-4e89-a872-974f33c50df3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.569996 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7bf6b82-46b9-4e89-a872-974f33c50df3-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:30 crc kubenswrapper[4988]: I1123 07:07:30.843224 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6bfdb6f865-pn8fq"] Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.056599 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.179560 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data\") pod \"28630714-a49b-4cb2-a9cc-484f47b83b74\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.179639 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-scripts\") pod \"28630714-a49b-4cb2-a9cc-484f47b83b74\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.179671 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-combined-ca-bundle\") pod \"28630714-a49b-4cb2-a9cc-484f47b83b74\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.179745 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28630714-a49b-4cb2-a9cc-484f47b83b74-etc-machine-id\") pod \"28630714-a49b-4cb2-a9cc-484f47b83b74\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.179767 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v2j7\" (UniqueName: \"kubernetes.io/projected/28630714-a49b-4cb2-a9cc-484f47b83b74-kube-api-access-8v2j7\") pod \"28630714-a49b-4cb2-a9cc-484f47b83b74\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.179817 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data-custom\") pod \"28630714-a49b-4cb2-a9cc-484f47b83b74\" (UID: \"28630714-a49b-4cb2-a9cc-484f47b83b74\") " Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.182777 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28630714-a49b-4cb2-a9cc-484f47b83b74-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "28630714-a49b-4cb2-a9cc-484f47b83b74" (UID: "28630714-a49b-4cb2-a9cc-484f47b83b74"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.184038 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "28630714-a49b-4cb2-a9cc-484f47b83b74" (UID: "28630714-a49b-4cb2-a9cc-484f47b83b74"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.188450 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-scripts" (OuterVolumeSpecName: "scripts") pod "28630714-a49b-4cb2-a9cc-484f47b83b74" (UID: "28630714-a49b-4cb2-a9cc-484f47b83b74"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.190596 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28630714-a49b-4cb2-a9cc-484f47b83b74-kube-api-access-8v2j7" (OuterVolumeSpecName: "kube-api-access-8v2j7") pod "28630714-a49b-4cb2-a9cc-484f47b83b74" (UID: "28630714-a49b-4cb2-a9cc-484f47b83b74"). InnerVolumeSpecName "kube-api-access-8v2j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.250361 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28630714-a49b-4cb2-a9cc-484f47b83b74" (UID: "28630714-a49b-4cb2-a9cc-484f47b83b74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.282563 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.282601 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.282613 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.282624 4988 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28630714-a49b-4cb2-a9cc-484f47b83b74-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.282635 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v2j7\" (UniqueName: \"kubernetes.io/projected/28630714-a49b-4cb2-a9cc-484f47b83b74-kube-api-access-8v2j7\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.321780 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data" (OuterVolumeSpecName: "config-data") pod "28630714-a49b-4cb2-a9cc-484f47b83b74" (UID: "28630714-a49b-4cb2-a9cc-484f47b83b74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.362471 4988 generic.go:334] "Generic (PLEG): container finished" podID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerID="fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181" exitCode=0 Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.362555 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"28630714-a49b-4cb2-a9cc-484f47b83b74","Type":"ContainerDied","Data":"fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181"} Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.362562 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.362589 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"28630714-a49b-4cb2-a9cc-484f47b83b74","Type":"ContainerDied","Data":"14eee15dd2c6a39a9464a75b0e14e527175245b62811af9b00a9306b9bd9de0f"} Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.362613 4988 scope.go:117] "RemoveContainer" containerID="cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.368087 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" event={"ID":"351d084c-73d8-4965-97c8-407826793cd6","Type":"ContainerStarted","Data":"6c41f5805b7efdab5e67edf42677c7e686ae567b5ac0613407643871e1d427b1"} Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.368144 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" event={"ID":"351d084c-73d8-4965-97c8-407826793cd6","Type":"ContainerStarted","Data":"34e7480000062695ebf573eedfae499c0ceef26baec215a617dadbdff949c05f"} Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.368160 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" event={"ID":"351d084c-73d8-4965-97c8-407826793cd6","Type":"ContainerStarted","Data":"fc6d1b0bada9dcd1d8cfa49893b8ed0749f5c923315de1a1c048614e299a2410"} Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.368748 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.368784 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.370441 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.370411 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4","Type":"ContainerStarted","Data":"4f923edc9b05971a7d5fb5887c8a50557ad21a567352d7073dedb45d936522d5"} Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.386109 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28630714-a49b-4cb2-a9cc-484f47b83b74-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.389761 4988 scope.go:117] "RemoveContainer" containerID="fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.398028 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" podStartSLOduration=2.398005883 podStartE2EDuration="2.398005883s" podCreationTimestamp="2025-11-23 07:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:31.390096645 +0000 UTC m=+1303.698609418" watchObservedRunningTime="2025-11-23 07:07:31.398005883 +0000 UTC m=+1303.706518646" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.407033 4988 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="c7bf6b82-46b9-4e89-a872-974f33c50df3" podUID="2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.415006 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.421918 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.455112 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:07:31 crc kubenswrapper[4988]: E1123 07:07:31.455657 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerName="cinder-scheduler" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.455682 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerName="cinder-scheduler" Nov 23 07:07:31 crc kubenswrapper[4988]: E1123 07:07:31.455717 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerName="probe" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.455725 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerName="probe" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.455941 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerName="cinder-scheduler" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.455977 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="28630714-a49b-4cb2-a9cc-484f47b83b74" containerName="probe" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.456646 4988 scope.go:117] "RemoveContainer" containerID="cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.457164 4988 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: E1123 07:07:31.457332 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5\": container with ID starting with cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5 not found: ID does not exist" containerID="cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.457376 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5"} err="failed to get container status \"cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5\": rpc error: code = NotFound desc = could not find container \"cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5\": container with ID starting with cb54e45a65a94f3caa7a843f0a6393d0d1052dc689dae274c5faf09d3fff62f5 not found: ID does not exist" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.457408 4988 scope.go:117] "RemoveContainer" containerID="fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181" Nov 23 07:07:31 crc kubenswrapper[4988]: E1123 07:07:31.457751 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181\": container with ID starting with fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181 not found: ID does not exist" containerID="fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.457808 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181"} err="failed to get container status \"fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181\": rpc error: code = NotFound desc = could not find container \"fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181\": container with ID starting with fe9dcf35c4b98f05b0687ef06df4f1b91abc846ac26f7a67dd2ee828094df181 not found: ID does not exist" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.463417 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.467800 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.487302 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.487425 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2f5e1e6-0051-487f-b9ca-76003e7deed1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.487494 4988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.487592 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.487636 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.487676 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9kj8\" (UniqueName: \"kubernetes.io/projected/a2f5e1e6-0051-487f-b9ca-76003e7deed1-kube-api-access-h9kj8\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.589361 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.589450 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.590252 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9kj8\" (UniqueName: \"kubernetes.io/projected/a2f5e1e6-0051-487f-b9ca-76003e7deed1-kube-api-access-h9kj8\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.590709 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.590848 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2f5e1e6-0051-487f-b9ca-76003e7deed1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.590897 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.591282 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2f5e1e6-0051-487f-b9ca-76003e7deed1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.595129 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.595776 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.597493 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.599897 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.607043 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9kj8\" (UniqueName: \"kubernetes.io/projected/a2f5e1e6-0051-487f-b9ca-76003e7deed1-kube-api-access-h9kj8\") pod \"cinder-scheduler-0\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " pod="openstack/cinder-scheduler-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.686744 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.687033 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="ceilometer-central-agent" containerID="cri-o://a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e" gracePeriod=30 Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.687421 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="sg-core" containerID="cri-o://b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b" gracePeriod=30 Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.687494 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="ceilometer-notification-agent" containerID="cri-o://edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc" 
gracePeriod=30 Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.687580 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="proxy-httpd" containerID="cri-o://1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796" gracePeriod=30 Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.691663 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 07:07:31 crc kubenswrapper[4988]: I1123 07:07:31.788675 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:07:32 crc kubenswrapper[4988]: I1123 07:07:32.384488 4988 generic.go:334] "Generic (PLEG): container finished" podID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerID="1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796" exitCode=0 Nov 23 07:07:32 crc kubenswrapper[4988]: I1123 07:07:32.384835 4988 generic.go:334] "Generic (PLEG): container finished" podID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerID="b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b" exitCode=2 Nov 23 07:07:32 crc kubenswrapper[4988]: I1123 07:07:32.384851 4988 generic.go:334] "Generic (PLEG): container finished" podID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerID="a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e" exitCode=0 Nov 23 07:07:32 crc kubenswrapper[4988]: I1123 07:07:32.384545 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerDied","Data":"1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796"} Nov 23 07:07:32 crc kubenswrapper[4988]: I1123 07:07:32.384949 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerDied","Data":"b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b"} Nov 23 07:07:32 crc kubenswrapper[4988]: I1123 07:07:32.384990 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerDied","Data":"a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e"} Nov 23 07:07:32 crc kubenswrapper[4988]: I1123 07:07:32.416168 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:07:32 crc kubenswrapper[4988]: W1123 07:07:32.445167 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2f5e1e6_0051_487f_b9ca_76003e7deed1.slice/crio-574d8a84526d6e5d16672b7ef4cc64cf6eaf4d91ca66342e55524186ddac960b WatchSource:0}: Error finding container 574d8a84526d6e5d16672b7ef4cc64cf6eaf4d91ca66342e55524186ddac960b: Status 404 returned error can't find the container with id 574d8a84526d6e5d16672b7ef4cc64cf6eaf4d91ca66342e55524186ddac960b Nov 23 07:07:32 crc kubenswrapper[4988]: I1123 07:07:32.508784 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28630714-a49b-4cb2-a9cc-484f47b83b74" path="/var/lib/kubelet/pods/28630714-a49b-4cb2-a9cc-484f47b83b74/volumes" Nov 23 07:07:33 crc kubenswrapper[4988]: I1123 07:07:33.049555 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76c58b6d97-5mmkw" podUID="8290145a-df4b-4381-81d8-d2ce14d105fd" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.143:5353: i/o timeout" Nov 23 07:07:33 crc kubenswrapper[4988]: I1123 07:07:33.395059 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a2f5e1e6-0051-487f-b9ca-76003e7deed1","Type":"ContainerStarted","Data":"574d8a84526d6e5d16672b7ef4cc64cf6eaf4d91ca66342e55524186ddac960b"} Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.416404 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a2f5e1e6-0051-487f-b9ca-76003e7deed1","Type":"ContainerStarted","Data":"03e37d124318dbc7bdae86e68e8a56352fe7f075c2540608bb85d91aa3b8f04d"} Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.509493 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.628899 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.744577 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-67cf57977d-cbhzp"] Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.744799 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-67cf57977d-cbhzp" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api-log" containerID="cri-o://84855dbb74b676ed94de2f2ad4b575dd9058d91aa6c4efcfc3d5657bbb17aa8c" gracePeriod=30 Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.745404 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-67cf57977d-cbhzp" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api" containerID="cri-o://268fb0bd0739fa78106d36525c18afa10cb4d516d25d1869d4a502190f90012b" gracePeriod=30 Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.881530 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.960457 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-config-data\") pod \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.960573 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-scripts\") pod \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.960606 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cgt2\" (UniqueName: \"kubernetes.io/projected/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-kube-api-access-9cgt2\") pod \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.960678 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-sg-core-conf-yaml\") pod \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.960849 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-log-httpd\") pod \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.960909 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-run-httpd\") pod \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.960931 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-combined-ca-bundle\") pod \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\" (UID: \"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f\") " Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.963722 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" (UID: "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.963759 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" (UID: "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.970241 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-scripts" (OuterVolumeSpecName: "scripts") pod "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" (UID: "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:34 crc kubenswrapper[4988]: I1123 07:07:34.978352 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-kube-api-access-9cgt2" (OuterVolumeSpecName: "kube-api-access-9cgt2") pod "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" (UID: "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f"). InnerVolumeSpecName "kube-api-access-9cgt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.036368 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" (UID: "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.067451 4988 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.067488 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.067498 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.067508 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.067523 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cgt2\" (UniqueName: \"kubernetes.io/projected/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-kube-api-access-9cgt2\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.144364 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-config-data" (OuterVolumeSpecName: "config-data") pod "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" (UID: "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.169063 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.179800 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" (UID: "5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.271274 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.426672 4988 generic.go:334] "Generic (PLEG): container finished" podID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerID="edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc" exitCode=0 Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.426717 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.426737 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerDied","Data":"edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc"} Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.426764 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f","Type":"ContainerDied","Data":"67286e64b93672ad61c0ce1537ac5bb65b23a7372dc1abbac31d66134393bf4c"} Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.426780 4988 scope.go:117] "RemoveContainer" containerID="1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.441065 4988 generic.go:334] "Generic (PLEG): container finished" podID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerID="84855dbb74b676ed94de2f2ad4b575dd9058d91aa6c4efcfc3d5657bbb17aa8c" exitCode=143 Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.441139 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67cf57977d-cbhzp" event={"ID":"d7abd5c6-9353-4af5-bcc3-9d6e66517978","Type":"ContainerDied","Data":"84855dbb74b676ed94de2f2ad4b575dd9058d91aa6c4efcfc3d5657bbb17aa8c"} Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.443780 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a2f5e1e6-0051-487f-b9ca-76003e7deed1","Type":"ContainerStarted","Data":"df780d567dc75d2a747323c194d8edc1f1c8620703d8cf00e07d978a89c64cd2"} Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.457928 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.472614 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.474385 4988 scope.go:117] "RemoveContainer" 
containerID="b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.482465 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.482444109 podStartE2EDuration="4.482444109s" podCreationTimestamp="2025-11-23 07:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:35.480848341 +0000 UTC m=+1307.789361104" watchObservedRunningTime="2025-11-23 07:07:35.482444109 +0000 UTC m=+1307.790956872" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.566284 4988 scope.go:117] "RemoveContainer" containerID="edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.570999 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:35 crc kubenswrapper[4988]: E1123 07:07:35.571727 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="ceilometer-central-agent" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.571744 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="ceilometer-central-agent" Nov 23 07:07:35 crc kubenswrapper[4988]: E1123 07:07:35.571767 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="proxy-httpd" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.571775 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="proxy-httpd" Nov 23 07:07:35 crc kubenswrapper[4988]: E1123 07:07:35.571803 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="ceilometer-notification-agent" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.571809 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="ceilometer-notification-agent" Nov 23 07:07:35 crc kubenswrapper[4988]: E1123 07:07:35.571826 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="sg-core" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.571831 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="sg-core" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.572364 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="ceilometer-central-agent" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.572392 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="proxy-httpd" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.572413 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="sg-core" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.572432 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" containerName="ceilometer-notification-agent" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.578116 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.585777 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.585836 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.605439 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.630420 4988 scope.go:117] "RemoveContainer" containerID="a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.668391 4988 scope.go:117] "RemoveContainer" containerID="1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796" Nov 23 07:07:35 crc kubenswrapper[4988]: E1123 07:07:35.669813 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796\": container with ID starting with 1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796 not found: ID does not exist" containerID="1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.669867 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796"} err="failed to get container status \"1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796\": rpc error: code = NotFound desc = could not find container \"1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796\": container with ID starting with 1022cffa75c75b180bf0cc2520389de50f4322739d286aeb9332f94884918796 not found: ID does not exist" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.669895 4988 scope.go:117] "RemoveContainer" containerID="b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b" Nov 23 07:07:35 crc kubenswrapper[4988]: E1123 07:07:35.671450 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b\": container with ID starting with b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b not found: ID does not exist" containerID="b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.671549 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b"} err="failed to get container status \"b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b\": rpc error: code = NotFound desc = could not find container \"b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b\": container with ID starting with b71511ee0756256b33a68b5fbe718b7fc075e82a6d79133e77c618fd1cdef01b not found: ID does not exist" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.671579 4988 scope.go:117] "RemoveContainer" containerID="edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc" Nov 23 07:07:35 crc kubenswrapper[4988]: E1123 07:07:35.673209 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc\": container with ID starting with edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc not found: ID does not exist" containerID="edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.673251 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc"} err="failed to get container status \"edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc\": rpc error: code = NotFound desc = could not find container \"edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc\": container with ID starting with edb7016e233592bc5e8a187b107d2ce8fdd21ae1bf6b9e665c1c7e59ddd0eebc not found: ID does not exist" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.673277 4988 scope.go:117] "RemoveContainer" containerID="a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e" Nov 23 07:07:35 crc kubenswrapper[4988]: E1123 07:07:35.676523 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e\": container with ID starting with a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e not found: ID does not exist" containerID="a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.676596 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e"} err="failed to get container status \"a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e\": rpc error: code = NotFound desc = could not find container \"a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e\": container with ID starting with a092c53541828db0f882b7046784312cdc2f1d9b8f6e6e248527cb41b1b0c85e not found: ID does not exist" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.691371 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.691443 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-run-httpd\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.691552 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.691590 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-log-httpd\") pod \"ceilometer-0\" (UID: 
\"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.691630 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v9mr\" (UniqueName: \"kubernetes.io/projected/ab161b00-9a11-424a-ac7e-4201c5f2159b-kube-api-access-8v9mr\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.691688 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-scripts\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.691912 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-config-data\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.793040 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-config-data\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.793124 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.793152 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-run-httpd\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.793175 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.793555 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-log-httpd\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.793599 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v9mr\" (UniqueName: \"kubernetes.io/projected/ab161b00-9a11-424a-ac7e-4201c5f2159b-kube-api-access-8v9mr\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.793630 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-scripts\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.797872 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-scripts\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.801449 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-config-data\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.803711 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-log-httpd\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.804243 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-run-httpd\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.811094 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.820324 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.828540 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v9mr\" (UniqueName: \"kubernetes.io/projected/ab161b00-9a11-424a-ac7e-4201c5f2159b-kube-api-access-8v9mr\") pod \"ceilometer-0\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") " pod="openstack/ceilometer-0" Nov 23 07:07:35 crc kubenswrapper[4988]: I1123 07:07:35.923142 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:07:36 crc kubenswrapper[4988]: I1123 07:07:36.451959 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:36 crc kubenswrapper[4988]: W1123 07:07:36.464084 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab161b00_9a11_424a_ac7e_4201c5f2159b.slice/crio-9e2709f918b521dbb88a1b767c3837bb6b24491494da539ccdc0bc845e82e338 WatchSource:0}: Error finding container 9e2709f918b521dbb88a1b767c3837bb6b24491494da539ccdc0bc845e82e338: Status 404 returned error can't find the container with id 9e2709f918b521dbb88a1b767c3837bb6b24491494da539ccdc0bc845e82e338 Nov 23 07:07:36 crc kubenswrapper[4988]: I1123 07:07:36.509780 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f" path="/var/lib/kubelet/pods/5fdd6baa-b0e9-4a41-a13d-9b7f15e14e9f/volumes" Nov 23 07:07:36 crc kubenswrapper[4988]: I1123 07:07:36.791274 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 07:07:37 crc kubenswrapper[4988]: I1123 07:07:37.471183 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerStarted","Data":"9e2709f918b521dbb88a1b767c3837bb6b24491494da539ccdc0bc845e82e338"} Nov 23 07:07:37 crc kubenswrapper[4988]: I1123 07:07:37.969554 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-67cf57977d-cbhzp" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:38780->10.217.0.157:9311: read: connection reset by peer" Nov 23 07:07:37 crc kubenswrapper[4988]: I1123 07:07:37.969605 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-67cf57977d-cbhzp" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:38796->10.217.0.157:9311: read: connection reset by peer" Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.487623 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerStarted","Data":"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05"} Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.503681 4988 generic.go:334] "Generic (PLEG): container finished" podID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerID="268fb0bd0739fa78106d36525c18afa10cb4d516d25d1869d4a502190f90012b" exitCode=0 Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.513272 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67cf57977d-cbhzp" event={"ID":"d7abd5c6-9353-4af5-bcc3-9d6e66517978","Type":"ContainerDied","Data":"268fb0bd0739fa78106d36525c18afa10cb4d516d25d1869d4a502190f90012b"} Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.605273 4988 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.663689 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data\") pod \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") "
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.663759 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-combined-ca-bundle\") pod \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") "
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.663885 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7abd5c6-9353-4af5-bcc3-9d6e66517978-logs\") pod \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") "
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.663953 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzhwz\" (UniqueName: \"kubernetes.io/projected/d7abd5c6-9353-4af5-bcc3-9d6e66517978-kube-api-access-xzhwz\") pod \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") "
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.664018 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data-custom\") pod \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\" (UID: \"d7abd5c6-9353-4af5-bcc3-9d6e66517978\") "
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.664931 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7abd5c6-9353-4af5-bcc3-9d6e66517978-logs" (OuterVolumeSpecName: "logs") pod "d7abd5c6-9353-4af5-bcc3-9d6e66517978" (UID: "d7abd5c6-9353-4af5-bcc3-9d6e66517978"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.682406 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7abd5c6-9353-4af5-bcc3-9d6e66517978-kube-api-access-xzhwz" (OuterVolumeSpecName: "kube-api-access-xzhwz") pod "d7abd5c6-9353-4af5-bcc3-9d6e66517978" (UID: "d7abd5c6-9353-4af5-bcc3-9d6e66517978"). InnerVolumeSpecName "kube-api-access-xzhwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.682385 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d7abd5c6-9353-4af5-bcc3-9d6e66517978" (UID: "d7abd5c6-9353-4af5-bcc3-9d6e66517978"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.709799 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7abd5c6-9353-4af5-bcc3-9d6e66517978" (UID: "d7abd5c6-9353-4af5-bcc3-9d6e66517978"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.733008 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data" (OuterVolumeSpecName: "config-data") pod "d7abd5c6-9353-4af5-bcc3-9d6e66517978" (UID: "d7abd5c6-9353-4af5-bcc3-9d6e66517978"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.766751 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.767098 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.767115 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7abd5c6-9353-4af5-bcc3-9d6e66517978-logs\") on node \"crc\" DevicePath \"\""
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.767126 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzhwz\" (UniqueName: \"kubernetes.io/projected/d7abd5c6-9353-4af5-bcc3-9d6e66517978-kube-api-access-xzhwz\") on node \"crc\" DevicePath \"\""
Nov 23 07:07:38 crc kubenswrapper[4988]: I1123 07:07:38.767136 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7abd5c6-9353-4af5-bcc3-9d6e66517978-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 23 07:07:39 crc kubenswrapper[4988]: I1123 07:07:39.516610 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67cf57977d-cbhzp" event={"ID":"d7abd5c6-9353-4af5-bcc3-9d6e66517978","Type":"ContainerDied","Data":"e3abb95e97ed6a83213651f73d720d5348771aa07f4e11cf90b4ee1a7d84cb4c"}
Nov 23 07:07:39 crc kubenswrapper[4988]: I1123 07:07:39.516658 4988 scope.go:117] "RemoveContainer" containerID="268fb0bd0739fa78106d36525c18afa10cb4d516d25d1869d4a502190f90012b"
Nov 23 07:07:39 crc kubenswrapper[4988]: I1123 07:07:39.516665 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-67cf57977d-cbhzp"
Nov 23 07:07:39 crc kubenswrapper[4988]: I1123 07:07:39.586863 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-67cf57977d-cbhzp"]
Nov 23 07:07:39 crc kubenswrapper[4988]: I1123 07:07:39.607032 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-67cf57977d-cbhzp"]
Nov 23 07:07:40 crc kubenswrapper[4988]: I1123 07:07:40.288821 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bfdb6f865-pn8fq"
Nov 23 07:07:40 crc kubenswrapper[4988]: I1123 07:07:40.290678 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6bfdb6f865-pn8fq"
Nov 23 07:07:40 crc kubenswrapper[4988]: I1123 07:07:40.511271 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" path="/var/lib/kubelet/pods/d7abd5c6-9353-4af5-bcc3-9d6e66517978/volumes"
Nov 23 07:07:41 crc kubenswrapper[4988]: I1123 07:07:41.592033 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:07:41 crc kubenswrapper[4988]: I1123 07:07:41.872691 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-97bc9b8d4-j6jbm"
Nov 23 07:07:42 crc kubenswrapper[4988]: I1123 07:07:42.012852 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 23 07:07:44 crc kubenswrapper[4988]: I1123 07:07:44.617280 4988 scope.go:117] "RemoveContainer" containerID="84855dbb74b676ed94de2f2ad4b575dd9058d91aa6c4efcfc3d5657bbb17aa8c"
Nov 23 07:07:45 crc kubenswrapper[4988]: I1123 07:07:45.574435 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4","Type":"ContainerStarted","Data":"a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9"}
Nov 23 07:07:45 crc kubenswrapper[4988]: I1123 07:07:45.577708 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerStarted","Data":"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d"}
Nov 23 07:07:45 crc kubenswrapper[4988]: I1123 07:07:45.602429 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.448482846 podStartE2EDuration="16.602401959s" podCreationTimestamp="2025-11-23 07:07:29 +0000 UTC" firstStartedPulling="2025-11-23 07:07:30.564791854 +0000 UTC m=+1302.873304617" lastFinishedPulling="2025-11-23 07:07:44.718710967 +0000 UTC m=+1317.027223730" observedRunningTime="2025-11-23 07:07:45.593037376 +0000 UTC m=+1317.901550159" watchObservedRunningTime="2025-11-23 07:07:45.602401959 +0000 UTC m=+1317.910914732"
Nov 23 07:07:46 crc kubenswrapper[4988]: I1123 07:07:46.591812 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerStarted","Data":"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d"}
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.611474 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerStarted","Data":"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13"}
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.612243 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="ceilometer-central-agent" containerID="cri-o://c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05" gracePeriod=30
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.612433 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.612541 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="proxy-httpd" containerID="cri-o://a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13" gracePeriod=30
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.612598 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="sg-core" containerID="cri-o://ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d" gracePeriod=30
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.612563 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="ceilometer-notification-agent" containerID="cri-o://de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d" gracePeriod=30
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.649052 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.093433399 podStartE2EDuration="12.649034161s" podCreationTimestamp="2025-11-23 07:07:35 +0000 UTC" firstStartedPulling="2025-11-23 07:07:36.469540554 +0000 UTC m=+1308.778053317" lastFinishedPulling="2025-11-23 07:07:47.025141316 +0000 UTC m=+1319.333654079" observedRunningTime="2025-11-23 07:07:47.645849076 +0000 UTC m=+1319.954361839" watchObservedRunningTime="2025-11-23 07:07:47.649034161 +0000 UTC m=+1319.957546924"
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.713978 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.714438 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerName="glance-log" containerID="cri-o://b6e5acca2641d8e9a3eda7015fc4fdfb6bc3436733445b23cf69c1ae74733c20" gracePeriod=30
Nov 23 07:07:47 crc kubenswrapper[4988]: I1123 07:07:47.715002 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerName="glance-httpd" containerID="cri-o://b0af7e44f04e408944e7c1b2eef12ae141deb9974fd6480160b622ff9e9d9379" gracePeriod=30
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.006335 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68dbd6466f-n6f5g"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.073648 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-97bc9b8d4-j6jbm"]
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.074205 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-97bc9b8d4-j6jbm" podUID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerName="neutron-api" containerID="cri-o://a9147a590705da748bcca443317184e83e47a36d614d35acb8aa413769488875" gracePeriod=30
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.074508 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-97bc9b8d4-j6jbm" podUID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerName="neutron-httpd" containerID="cri-o://c2342eb37abcdebf788ea07b47f06fb4035f9c416f9399acc7b408319740da2d" gracePeriod=30
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.533469 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.623787 4988 generic.go:334] "Generic (PLEG): container finished" podID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerID="c2342eb37abcdebf788ea07b47f06fb4035f9c416f9399acc7b408319740da2d" exitCode=0
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.623841 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-97bc9b8d4-j6jbm" event={"ID":"0777df4e-bc8b-4260-a848-fe68de358bbe","Type":"ContainerDied","Data":"c2342eb37abcdebf788ea07b47f06fb4035f9c416f9399acc7b408319740da2d"}
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.625512 4988 generic.go:334] "Generic (PLEG): container finished" podID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerID="b6e5acca2641d8e9a3eda7015fc4fdfb6bc3436733445b23cf69c1ae74733c20" exitCode=143
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.625548 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1a9d5d-c267-4671-8a0a-498a24be0e25","Type":"ContainerDied","Data":"b6e5acca2641d8e9a3eda7015fc4fdfb6bc3436733445b23cf69c1ae74733c20"}
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627852 4988 generic.go:334] "Generic (PLEG): container finished" podID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerID="a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13" exitCode=0
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627867 4988 generic.go:334] "Generic (PLEG): container finished" podID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerID="ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d" exitCode=2
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627875 4988 generic.go:334] "Generic (PLEG): container finished" podID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerID="de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d" exitCode=0
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627881 4988 generic.go:334] "Generic (PLEG): container finished" podID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerID="c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05" exitCode=0
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627893 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerDied","Data":"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13"}
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627906 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerDied","Data":"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d"}
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627915 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerDied","Data":"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d"}
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627923 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerDied","Data":"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05"}
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627933 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab161b00-9a11-424a-ac7e-4201c5f2159b","Type":"ContainerDied","Data":"9e2709f918b521dbb88a1b767c3837bb6b24491494da539ccdc0bc845e82e338"}
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.627947 4988 scope.go:117] "RemoveContainer" containerID="a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.628058 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.653671 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-log-httpd\") pod \"ab161b00-9a11-424a-ac7e-4201c5f2159b\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") "
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.653818 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-config-data\") pod \"ab161b00-9a11-424a-ac7e-4201c5f2159b\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") "
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.653871 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-run-httpd\") pod \"ab161b00-9a11-424a-ac7e-4201c5f2159b\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") "
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.653937 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-sg-core-conf-yaml\") pod \"ab161b00-9a11-424a-ac7e-4201c5f2159b\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") "
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.654003 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-combined-ca-bundle\") pod \"ab161b00-9a11-424a-ac7e-4201c5f2159b\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") "
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.654059 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v9mr\" (UniqueName: \"kubernetes.io/projected/ab161b00-9a11-424a-ac7e-4201c5f2159b-kube-api-access-8v9mr\") pod \"ab161b00-9a11-424a-ac7e-4201c5f2159b\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") "
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.654109 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-scripts\") pod \"ab161b00-9a11-424a-ac7e-4201c5f2159b\" (UID: \"ab161b00-9a11-424a-ac7e-4201c5f2159b\") "
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.655703 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab161b00-9a11-424a-ac7e-4201c5f2159b" (UID: "ab161b00-9a11-424a-ac7e-4201c5f2159b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.657074 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab161b00-9a11-424a-ac7e-4201c5f2159b" (UID: "ab161b00-9a11-424a-ac7e-4201c5f2159b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.661639 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-scripts" (OuterVolumeSpecName: "scripts") pod "ab161b00-9a11-424a-ac7e-4201c5f2159b" (UID: "ab161b00-9a11-424a-ac7e-4201c5f2159b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.664363 4988 scope.go:117] "RemoveContainer" containerID="ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.677614 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab161b00-9a11-424a-ac7e-4201c5f2159b-kube-api-access-8v9mr" (OuterVolumeSpecName: "kube-api-access-8v9mr") pod "ab161b00-9a11-424a-ac7e-4201c5f2159b" (UID: "ab161b00-9a11-424a-ac7e-4201c5f2159b"). InnerVolumeSpecName "kube-api-access-8v9mr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.697343 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab161b00-9a11-424a-ac7e-4201c5f2159b" (UID: "ab161b00-9a11-424a-ac7e-4201c5f2159b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.743396 4988 scope.go:117] "RemoveContainer" containerID="de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.755733 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v9mr\" (UniqueName: \"kubernetes.io/projected/ab161b00-9a11-424a-ac7e-4201c5f2159b-kube-api-access-8v9mr\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.755835 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.755891 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.755964 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab161b00-9a11-424a-ac7e-4201c5f2159b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.756023 4988 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.760796 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-config-data" (OuterVolumeSpecName: "config-data") pod "ab161b00-9a11-424a-ac7e-4201c5f2159b" (UID: "ab161b00-9a11-424a-ac7e-4201c5f2159b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.766608 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab161b00-9a11-424a-ac7e-4201c5f2159b" (UID: "ab161b00-9a11-424a-ac7e-4201c5f2159b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.779547 4988 scope.go:117] "RemoveContainer" containerID="c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.822524 4988 scope.go:117] "RemoveContainer" containerID="a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13" Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.825336 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13\": container with ID starting with a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13 not found: ID does not exist" containerID="a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.825380 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13"} err="failed to get container status \"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13\": rpc error: code = NotFound desc = could not find container \"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13\": container with ID starting with a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13 not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.825408 4988 scope.go:117] "RemoveContainer" containerID="ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d" Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.826214 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d\": container with ID starting with ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d not found: ID does not exist" containerID="ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.826252 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d"} err="failed to get container status \"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d\": rpc error: code = NotFound desc = could not find container \"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d\": container with ID starting with ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.826286 4988 scope.go:117] "RemoveContainer" containerID="de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d" Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.826782 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d\": container with ID starting with de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d not found: ID does not exist" containerID="de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.826814 4988 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d"} err="failed to get container status \"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d\": rpc error: code = NotFound desc = could not find container \"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d\": container with ID starting with de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.826831 4988 scope.go:117] "RemoveContainer" containerID="c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05" Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.827523 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05\": container with ID starting with c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05 not found: ID does not exist" containerID="c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.827549 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05"} err="failed to get container status \"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05\": rpc error: code = NotFound desc = could not find container \"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05\": container with ID starting with c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05 not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.827565 4988 scope.go:117] "RemoveContainer" containerID="a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.827948 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13"} err="failed to get container status \"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13\": rpc error: code = NotFound desc = could not find container \"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13\": container with ID starting with a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13 not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.827968 4988 scope.go:117] "RemoveContainer" containerID="ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.828173 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d"} err="failed to get container status \"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d\": rpc error: code = NotFound desc = could not find container \"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d\": container with ID starting with ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.828208 4988 scope.go:117] "RemoveContainer" containerID="de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.828395 4988 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d"} err="failed to get container status \"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d\": rpc error: code = NotFound desc = could not find container \"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d\": container with ID starting with de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.828412 4988 scope.go:117] "RemoveContainer" containerID="c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.828594 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05"} err="failed to get container status \"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05\": rpc error: code = NotFound desc = could not find container \"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05\": container with ID starting with c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05 not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.828613 4988 scope.go:117] "RemoveContainer" containerID="a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.828856 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13"} err="failed to get container status \"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13\": rpc error: code = NotFound desc = could not find container \"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13\": container with ID starting with a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13 not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.828878 4988 scope.go:117] "RemoveContainer" containerID="ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.829114 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d"} err="failed to get container status \"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d\": rpc error: code = NotFound desc = could not find container \"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d\": container with ID starting with ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d not found: ID does not exist" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.829133 4988 scope.go:117] "RemoveContainer" containerID="de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d" Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.829623 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d"} err="failed to get container status \"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d\": rpc error: code = NotFound desc = could not find container \"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d\": container with ID starting with de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d not found: ID does not exist" Nov 
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.829929 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05"} err="failed to get container status \"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05\": rpc error: code = NotFound desc = could not find container \"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05\": container with ID starting with c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05 not found: ID does not exist"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.829950 4988 scope.go:117] "RemoveContainer" containerID="a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.830273 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13"} err="failed to get container status \"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13\": rpc error: code = NotFound desc = could not find container \"a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13\": container with ID starting with a0c5a1baa65d44dc0a15bc50115d5bf5b44e104a3caba8ae2cb62e6ba5ac0f13 not found: ID does not exist"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.830313 4988 scope.go:117] "RemoveContainer" containerID="ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.830752 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d"} err="failed to get container status \"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d\": rpc error: code = NotFound desc = could not find container \"ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d\": container with ID starting with ed6abcb749118974f929b8f6bd00697f6dd9fe6094a8c431fe65c978a280346d not found: ID does not exist"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.830779 4988 scope.go:117] "RemoveContainer" containerID="de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.831909 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d"} err="failed to get container status \"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d\": rpc error: code = NotFound desc = could not find container \"de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d\": container with ID starting with de6aed9a5efca9204934e1403f2ba173db2264d55c4658c804ad79b11283867d not found: ID does not exist"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.831934 4988 scope.go:117] "RemoveContainer" containerID="c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.832387 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05"} err="failed to get container status \"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05\": rpc error: code = NotFound desc = could not find container \"c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05\": container with ID starting with c1079d07001772065729805ac3b6a508c843682de27d1ad2939df9017e90cb05 not found: ID does not exist"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.857515 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.857544 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab161b00-9a11-424a-ac7e-4201c5f2159b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.957101 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.964611 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.984262 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.984842 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="ceilometer-central-agent"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.984938 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="ceilometer-central-agent"
Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.985012 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="sg-core"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.985085 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="sg-core"
Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.985175 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api-log"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.985273 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api-log"
Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.985349 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="ceilometer-notification-agent"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.985416 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="ceilometer-notification-agent"
Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.985495 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.985551 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api"
Nov 23 07:07:48 crc kubenswrapper[4988]: E1123 07:07:48.985610 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="proxy-httpd"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.985659 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="proxy-httpd"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.985884 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="ceilometer-notification-agent"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.985951 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="ceilometer-central-agent"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.986010 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api-log"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.986160 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="sg-core"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.986249 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" containerName="proxy-httpd"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.986323 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7abd5c6-9353-4af5-bcc3-9d6e66517978" containerName="barbican-api"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.987998 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.994335 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 07:07:48 crc kubenswrapper[4988]: I1123 07:07:48.999589 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.007287 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.165206 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-run-httpd\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0"
Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.165265 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-scripts\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0"
Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.165303 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0"
Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.165346 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sl7j\" (UniqueName: \"kubernetes.io/projected/89bff2cd-3a2c-4541-940a-626e7c5f4f54-kube-api-access-9sl7j\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0"
Nov 23 07:07:49 crc
kubenswrapper[4988]: I1123 07:07:49.165390 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.165414 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-log-httpd\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.165440 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-config-data\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.267103 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sl7j\" (UniqueName: \"kubernetes.io/projected/89bff2cd-3a2c-4541-940a-626e7c5f4f54-kube-api-access-9sl7j\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.267163 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.267188 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-log-httpd\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.267227 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-config-data\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.267280 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-run-httpd\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.267309 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-scripts\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.267337 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " 
pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.268937 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-log-httpd\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.269527 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-run-httpd\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.273162 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.275921 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-scripts\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.288033 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-config-data\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.293799 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.293987 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sl7j\" (UniqueName: \"kubernetes.io/projected/89bff2cd-3a2c-4541-940a-626e7c5f4f54-kube-api-access-9sl7j\") pod \"ceilometer-0\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.304801 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.651410 4988 generic.go:334] "Generic (PLEG): container finished" podID="2b1718ea-f44e-41f4-9229-80af33e66280" containerID="3adeffdd5d441664c518d04f180da9de8589fc93a41633d4cc3e6535f1bc0de0" exitCode=137 Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.651479 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2b1718ea-f44e-41f4-9229-80af33e66280","Type":"ContainerDied","Data":"3adeffdd5d441664c518d04f180da9de8589fc93a41633d4cc3e6535f1bc0de0"} Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.769589 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.771269 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" containerName="glance-log" containerID="cri-o://2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd" gracePeriod=30 Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.771392 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" containerName="glance-httpd" containerID="cri-o://897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70" gracePeriod=30 Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.797361 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.858497 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.997409 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data-custom\") pod \"2b1718ea-f44e-41f4-9229-80af33e66280\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.997470 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b1718ea-f44e-41f4-9229-80af33e66280-etc-machine-id\") pod \"2b1718ea-f44e-41f4-9229-80af33e66280\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.997504 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-scripts\") pod \"2b1718ea-f44e-41f4-9229-80af33e66280\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.997529 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1718ea-f44e-41f4-9229-80af33e66280-logs\") pod \"2b1718ea-f44e-41f4-9229-80af33e66280\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.997579 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxkl8\" (UniqueName: \"kubernetes.io/projected/2b1718ea-f44e-41f4-9229-80af33e66280-kube-api-access-dxkl8\") pod \"2b1718ea-f44e-41f4-9229-80af33e66280\" (UID: 
\"2b1718ea-f44e-41f4-9229-80af33e66280\") " Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.997600 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data\") pod \"2b1718ea-f44e-41f4-9229-80af33e66280\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.997627 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-combined-ca-bundle\") pod \"2b1718ea-f44e-41f4-9229-80af33e66280\" (UID: \"2b1718ea-f44e-41f4-9229-80af33e66280\") " Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.997626 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b1718ea-f44e-41f4-9229-80af33e66280-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2b1718ea-f44e-41f4-9229-80af33e66280" (UID: "2b1718ea-f44e-41f4-9229-80af33e66280"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.997914 4988 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b1718ea-f44e-41f4-9229-80af33e66280-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:49 crc kubenswrapper[4988]: I1123 07:07:49.998516 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b1718ea-f44e-41f4-9229-80af33e66280-logs" (OuterVolumeSpecName: "logs") pod "2b1718ea-f44e-41f4-9229-80af33e66280" (UID: "2b1718ea-f44e-41f4-9229-80af33e66280"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.003966 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2b1718ea-f44e-41f4-9229-80af33e66280" (UID: "2b1718ea-f44e-41f4-9229-80af33e66280"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.004909 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-scripts" (OuterVolumeSpecName: "scripts") pod "2b1718ea-f44e-41f4-9229-80af33e66280" (UID: "2b1718ea-f44e-41f4-9229-80af33e66280"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.015350 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b1718ea-f44e-41f4-9229-80af33e66280-kube-api-access-dxkl8" (OuterVolumeSpecName: "kube-api-access-dxkl8") pod "2b1718ea-f44e-41f4-9229-80af33e66280" (UID: "2b1718ea-f44e-41f4-9229-80af33e66280"). InnerVolumeSpecName "kube-api-access-dxkl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.028342 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b1718ea-f44e-41f4-9229-80af33e66280" (UID: "2b1718ea-f44e-41f4-9229-80af33e66280"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.082882 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data" (OuterVolumeSpecName: "config-data") pod "2b1718ea-f44e-41f4-9229-80af33e66280" (UID: "2b1718ea-f44e-41f4-9229-80af33e66280"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.100227 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.100557 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.100884 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b1718ea-f44e-41f4-9229-80af33e66280-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.100982 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxkl8\" (UniqueName: \"kubernetes.io/projected/2b1718ea-f44e-41f4-9229-80af33e66280-kube-api-access-dxkl8\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.101040 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.101092 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1718ea-f44e-41f4-9229-80af33e66280-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.129856 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.516615 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab161b00-9a11-424a-ac7e-4201c5f2159b" path="/var/lib/kubelet/pods/ab161b00-9a11-424a-ac7e-4201c5f2159b/volumes" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.571045 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-h48h7"] Nov 23 07:07:50 crc kubenswrapper[4988]: E1123 07:07:50.571499 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1718ea-f44e-41f4-9229-80af33e66280" containerName="cinder-api" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.571523 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1718ea-f44e-41f4-9229-80af33e66280" containerName="cinder-api" Nov 23 07:07:50 crc kubenswrapper[4988]: E1123 07:07:50.571570 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b1718ea-f44e-41f4-9229-80af33e66280" containerName="cinder-api-log" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.571578 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b1718ea-f44e-41f4-9229-80af33e66280" containerName="cinder-api-log" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.571773 4988 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2b1718ea-f44e-41f4-9229-80af33e66280" containerName="cinder-api-log" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.571798 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b1718ea-f44e-41f4-9229-80af33e66280" containerName="cinder-api" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.572679 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h48h7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.589121 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-h48h7"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.692563 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.694003 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2b1718ea-f44e-41f4-9229-80af33e66280","Type":"ContainerDied","Data":"920b4b2d8d48c9b2ce0ca37643b9c3859e14f76690d5ddd7a299371336bf9930"} Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.694040 4988 scope.go:117] "RemoveContainer" containerID="3adeffdd5d441664c518d04f180da9de8589fc93a41633d4cc3e6535f1bc0de0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.699847 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6bjf7"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.701889 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.706868 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-cb35-account-create-bcnxl"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.707962 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.708723 4988 generic.go:334] "Generic (PLEG): container finished" podID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerID="a9147a590705da748bcca443317184e83e47a36d614d35acb8aa413769488875" exitCode=0 Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.708804 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-97bc9b8d4-j6jbm" event={"ID":"0777df4e-bc8b-4260-a848-fe68de358bbe","Type":"ContainerDied","Data":"a9147a590705da748bcca443317184e83e47a36d614d35acb8aa413769488875"} Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.710650 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.713921 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x92lv\" (UniqueName: \"kubernetes.io/projected/c92c0a02-48da-476e-869f-db5f076076d5-kube-api-access-x92lv\") pod \"nova-api-db-create-h48h7\" (UID: \"c92c0a02-48da-476e-869f-db5f076076d5\") " pod="openstack/nova-api-db-create-h48h7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.714035 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c92c0a02-48da-476e-869f-db5f076076d5-operator-scripts\") pod \"nova-api-db-create-h48h7\" (UID: \"c92c0a02-48da-476e-869f-db5f076076d5\") " pod="openstack/nova-api-db-create-h48h7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.717650 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6bjf7"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.720653 4988 generic.go:334] "Generic (PLEG): container finished" podID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" containerID="2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd" exitCode=143 Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.720707 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b20a898b-0c5b-4b53-a550-246abf8f6d8a","Type":"ContainerDied","Data":"2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd"} Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.722228 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerStarted","Data":"0b031daac2d1afde95bc255e5e294dc4818f16f8016c4ea92985b1ad4fcfff81"} Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.725034 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-cb35-account-create-bcnxl"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.771948 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.784055 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.792406 4988 scope.go:117] "RemoveContainer" containerID="59b045f5ced0552d1b892cd62da608c62c1928d40258a314e61cdfc8af89b0fe" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.796779 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.798183 4988 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.801125 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.801380 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.801408 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.805099 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.815814 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c92c0a02-48da-476e-869f-db5f076076d5-operator-scripts\") pod \"nova-api-db-create-h48h7\" (UID: \"c92c0a02-48da-476e-869f-db5f076076d5\") " pod="openstack/nova-api-db-create-h48h7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.815849 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7997250c-3018-44d1-9c9a-ff245889d239-operator-scripts\") pod \"nova-cell0-db-create-6bjf7\" (UID: \"7997250c-3018-44d1-9c9a-ff245889d239\") " pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.815891 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e952929a-89c8-4084-836f-854260a97b3e-operator-scripts\") pod \"nova-api-cb35-account-create-bcnxl\" (UID: \"e952929a-89c8-4084-836f-854260a97b3e\") " pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.815985 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9t2c\" (UniqueName: \"kubernetes.io/projected/7997250c-3018-44d1-9c9a-ff245889d239-kube-api-access-t9t2c\") pod \"nova-cell0-db-create-6bjf7\" (UID: \"7997250c-3018-44d1-9c9a-ff245889d239\") " pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.816105 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x92lv\" (UniqueName: \"kubernetes.io/projected/c92c0a02-48da-476e-869f-db5f076076d5-kube-api-access-x92lv\") pod \"nova-api-db-create-h48h7\" (UID: \"c92c0a02-48da-476e-869f-db5f076076d5\") " pod="openstack/nova-api-db-create-h48h7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.816126 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84hkr\" (UniqueName: \"kubernetes.io/projected/e952929a-89c8-4084-836f-854260a97b3e-kube-api-access-84hkr\") pod \"nova-api-cb35-account-create-bcnxl\" (UID: \"e952929a-89c8-4084-836f-854260a97b3e\") " pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.816815 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c92c0a02-48da-476e-869f-db5f076076d5-operator-scripts\") pod \"nova-api-db-create-h48h7\" (UID: \"c92c0a02-48da-476e-869f-db5f076076d5\") " pod="openstack/nova-api-db-create-h48h7" 
Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.850016 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x92lv\" (UniqueName: \"kubernetes.io/projected/c92c0a02-48da-476e-869f-db5f076076d5-kube-api-access-x92lv\") pod \"nova-api-db-create-h48h7\" (UID: \"c92c0a02-48da-476e-869f-db5f076076d5\") " pod="openstack/nova-api-db-create-h48h7" Nov 23 07:07:50 crc kubenswrapper[4988]: E1123 07:07:50.864576 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b1718ea_f44e_41f4_9229_80af33e66280.slice\": RecentStats: unable to find data in memory cache]" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.889167 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h48h7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918253 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-scripts\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918335 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918371 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9t2c\" (UniqueName: \"kubernetes.io/projected/7997250c-3018-44d1-9c9a-ff245889d239-kube-api-access-t9t2c\") pod \"nova-cell0-db-create-6bjf7\" (UID: \"7997250c-3018-44d1-9c9a-ff245889d239\") " pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918413 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84hkr\" (UniqueName: \"kubernetes.io/projected/e952929a-89c8-4084-836f-854260a97b3e-kube-api-access-84hkr\") pod \"nova-api-cb35-account-create-bcnxl\" (UID: \"e952929a-89c8-4084-836f-854260a97b3e\") " pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918434 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918477 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4be2080-1204-4f6e-ac00-bba757695872-logs\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918498 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data-custom\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " 
pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918521 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7997250c-3018-44d1-9c9a-ff245889d239-operator-scripts\") pod \"nova-cell0-db-create-6bjf7\" (UID: \"7997250c-3018-44d1-9c9a-ff245889d239\") " pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918546 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfjn\" (UniqueName: \"kubernetes.io/projected/d4be2080-1204-4f6e-ac00-bba757695872-kube-api-access-4nfjn\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918565 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e952929a-89c8-4084-836f-854260a97b3e-operator-scripts\") pod \"nova-api-cb35-account-create-bcnxl\" (UID: \"e952929a-89c8-4084-836f-854260a97b3e\") " pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918584 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4be2080-1204-4f6e-ac00-bba757695872-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918604 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.918620 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.919680 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7997250c-3018-44d1-9c9a-ff245889d239-operator-scripts\") pod \"nova-cell0-db-create-6bjf7\" (UID: \"7997250c-3018-44d1-9c9a-ff245889d239\") " pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.920150 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e952929a-89c8-4084-836f-854260a97b3e-operator-scripts\") pod \"nova-api-cb35-account-create-bcnxl\" (UID: \"e952929a-89c8-4084-836f-854260a97b3e\") " pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.926350 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-m6l8j"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.927769 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.933445 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5c20-account-create-ctwp2"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.934805 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.943410 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.949034 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m6l8j"] Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.949828 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9t2c\" (UniqueName: \"kubernetes.io/projected/7997250c-3018-44d1-9c9a-ff245889d239-kube-api-access-t9t2c\") pod \"nova-cell0-db-create-6bjf7\" (UID: \"7997250c-3018-44d1-9c9a-ff245889d239\") " pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.952834 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84hkr\" (UniqueName: \"kubernetes.io/projected/e952929a-89c8-4084-836f-854260a97b3e-kube-api-access-84hkr\") pod \"nova-api-cb35-account-create-bcnxl\" (UID: \"e952929a-89c8-4084-836f-854260a97b3e\") " pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:50 crc kubenswrapper[4988]: I1123 07:07:50.960941 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5c20-account-create-ctwp2"] Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.032502 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4be2080-1204-4f6e-ac00-bba757695872-logs\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.032687 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data-custom\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.032718 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nfjn\" (UniqueName: \"kubernetes.io/projected/d4be2080-1204-4f6e-ac00-bba757695872-kube-api-access-4nfjn\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.032743 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4be2080-1204-4f6e-ac00-bba757695872-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.032765 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 
07:07:51.033026 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.033071 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170939b5-2d04-422e-a463-fa080622257b-operator-scripts\") pod \"nova-cell0-5c20-account-create-ctwp2\" (UID: \"170939b5-2d04-422e-a463-fa080622257b\") " pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.033099 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-scripts\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.033119 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6tjn\" (UniqueName: \"kubernetes.io/projected/170939b5-2d04-422e-a463-fa080622257b-kube-api-access-j6tjn\") pod \"nova-cell0-5c20-account-create-ctwp2\" (UID: \"170939b5-2d04-422e-a463-fa080622257b\") " pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.033141 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.033184 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa607400-86cc-43bd-ac9a-da02dc37dff7-operator-scripts\") pod \"nova-cell1-db-create-m6l8j\" (UID: \"aa607400-86cc-43bd-ac9a-da02dc37dff7\") " pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.033227 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.033254 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxpb\" (UniqueName: \"kubernetes.io/projected/aa607400-86cc-43bd-ac9a-da02dc37dff7-kube-api-access-lxxpb\") pod \"nova-cell1-db-create-m6l8j\" (UID: \"aa607400-86cc-43bd-ac9a-da02dc37dff7\") " pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.034831 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4be2080-1204-4f6e-ac00-bba757695872-logs\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.037442 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/d4be2080-1204-4f6e-ac00-bba757695872-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.044039 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.045622 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data-custom\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.050894 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.051727 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.057560 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nfjn\" (UniqueName: \"kubernetes.io/projected/d4be2080-1204-4f6e-ac00-bba757695872-kube-api-access-4nfjn\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.057241 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.068227 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-scripts\") pod \"cinder-api-0\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.100644 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.112546 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.125261 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-98a8-account-create-79sxz"] Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.126492 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.131896 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.145381 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.174172 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170939b5-2d04-422e-a463-fa080622257b-operator-scripts\") pod \"nova-cell0-5c20-account-create-ctwp2\" (UID: \"170939b5-2d04-422e-a463-fa080622257b\") " pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.174280 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h29xb\" (UniqueName: \"kubernetes.io/projected/d192035c-1590-4efb-af88-66e16c8afab7-kube-api-access-h29xb\") pod \"nova-cell1-98a8-account-create-79sxz\" (UID: \"d192035c-1590-4efb-af88-66e16c8afab7\") " pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.174340 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6tjn\" (UniqueName: \"kubernetes.io/projected/170939b5-2d04-422e-a463-fa080622257b-kube-api-access-j6tjn\") pod \"nova-cell0-5c20-account-create-ctwp2\" (UID: \"170939b5-2d04-422e-a463-fa080622257b\") " pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.174458 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa607400-86cc-43bd-ac9a-da02dc37dff7-operator-scripts\") pod \"nova-cell1-db-create-m6l8j\" (UID: \"aa607400-86cc-43bd-ac9a-da02dc37dff7\") " pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.174527 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d192035c-1590-4efb-af88-66e16c8afab7-operator-scripts\") pod \"nova-cell1-98a8-account-create-79sxz\" (UID: \"d192035c-1590-4efb-af88-66e16c8afab7\") " pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.175237 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170939b5-2d04-422e-a463-fa080622257b-operator-scripts\") pod \"nova-cell0-5c20-account-create-ctwp2\" (UID: \"170939b5-2d04-422e-a463-fa080622257b\") " pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.174569 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxpb\" (UniqueName: \"kubernetes.io/projected/aa607400-86cc-43bd-ac9a-da02dc37dff7-kube-api-access-lxxpb\") pod \"nova-cell1-db-create-m6l8j\" (UID: \"aa607400-86cc-43bd-ac9a-da02dc37dff7\") " pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.176298 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa607400-86cc-43bd-ac9a-da02dc37dff7-operator-scripts\") pod 
\"nova-cell1-db-create-m6l8j\" (UID: \"aa607400-86cc-43bd-ac9a-da02dc37dff7\") " pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.188374 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-98a8-account-create-79sxz"] Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.207332 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxpb\" (UniqueName: \"kubernetes.io/projected/aa607400-86cc-43bd-ac9a-da02dc37dff7-kube-api-access-lxxpb\") pod \"nova-cell1-db-create-m6l8j\" (UID: \"aa607400-86cc-43bd-ac9a-da02dc37dff7\") " pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.208949 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6tjn\" (UniqueName: \"kubernetes.io/projected/170939b5-2d04-422e-a463-fa080622257b-kube-api-access-j6tjn\") pod \"nova-cell0-5c20-account-create-ctwp2\" (UID: \"170939b5-2d04-422e-a463-fa080622257b\") " pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.277907 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d192035c-1590-4efb-af88-66e16c8afab7-operator-scripts\") pod \"nova-cell1-98a8-account-create-79sxz\" (UID: \"d192035c-1590-4efb-af88-66e16c8afab7\") " pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.278095 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h29xb\" (UniqueName: \"kubernetes.io/projected/d192035c-1590-4efb-af88-66e16c8afab7-kube-api-access-h29xb\") pod \"nova-cell1-98a8-account-create-79sxz\" (UID: \"d192035c-1590-4efb-af88-66e16c8afab7\") " pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.278786 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d192035c-1590-4efb-af88-66e16c8afab7-operator-scripts\") pod \"nova-cell1-98a8-account-create-79sxz\" (UID: \"d192035c-1590-4efb-af88-66e16c8afab7\") " pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.287807 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.295771 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.300606 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h29xb\" (UniqueName: \"kubernetes.io/projected/d192035c-1590-4efb-af88-66e16c8afab7-kube-api-access-h29xb\") pod \"nova-cell1-98a8-account-create-79sxz\" (UID: \"d192035c-1590-4efb-af88-66e16c8afab7\") " pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.376353 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.381998 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.481823 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-ovndb-tls-certs\") pod \"0777df4e-bc8b-4260-a848-fe68de358bbe\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.481885 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-config\") pod \"0777df4e-bc8b-4260-a848-fe68de358bbe\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.481911 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-httpd-config\") pod \"0777df4e-bc8b-4260-a848-fe68de358bbe\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.481930 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-combined-ca-bundle\") pod \"0777df4e-bc8b-4260-a848-fe68de358bbe\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.482091 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fkmb\" (UniqueName: \"kubernetes.io/projected/0777df4e-bc8b-4260-a848-fe68de358bbe-kube-api-access-2fkmb\") pod \"0777df4e-bc8b-4260-a848-fe68de358bbe\" (UID: \"0777df4e-bc8b-4260-a848-fe68de358bbe\") " Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.490580 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-h48h7"] Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.505680 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0777df4e-bc8b-4260-a848-fe68de358bbe-kube-api-access-2fkmb" (OuterVolumeSpecName: "kube-api-access-2fkmb") pod "0777df4e-bc8b-4260-a848-fe68de358bbe" (UID: "0777df4e-bc8b-4260-a848-fe68de358bbe"). InnerVolumeSpecName "kube-api-access-2fkmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.507930 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0777df4e-bc8b-4260-a848-fe68de358bbe" (UID: "0777df4e-bc8b-4260-a848-fe68de358bbe"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:51 crc kubenswrapper[4988]: W1123 07:07:51.543869 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc92c0a02_48da_476e_869f_db5f076076d5.slice/crio-bdf08b1c0ae7a23f32de82458e55ad905b0f1e25ab60a0ebdffb27c9925421a7 WatchSource:0}: Error finding container bdf08b1c0ae7a23f32de82458e55ad905b0f1e25ab60a0ebdffb27c9925421a7: Status 404 returned error can't find the container with id bdf08b1c0ae7a23f32de82458e55ad905b0f1e25ab60a0ebdffb27c9925421a7 Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.580587 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0777df4e-bc8b-4260-a848-fe68de358bbe" (UID: "0777df4e-bc8b-4260-a848-fe68de358bbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.584035 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fkmb\" (UniqueName: \"kubernetes.io/projected/0777df4e-bc8b-4260-a848-fe68de358bbe-kube-api-access-2fkmb\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.584062 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.584072 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.624145 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-config" (OuterVolumeSpecName: "config") pod "0777df4e-bc8b-4260-a848-fe68de358bbe" (UID: "0777df4e-bc8b-4260-a848-fe68de358bbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.639999 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0777df4e-bc8b-4260-a848-fe68de358bbe" (UID: "0777df4e-bc8b-4260-a848-fe68de358bbe"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.676930 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.676972 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.687337 4988 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.687372 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0777df4e-bc8b-4260-a848-fe68de358bbe-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.753055 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-97bc9b8d4-j6jbm" event={"ID":"0777df4e-bc8b-4260-a848-fe68de358bbe","Type":"ContainerDied","Data":"7d3786cec333827d2dddbf2c60e55adbe174eca66ef6b70a381f513a20d5ec0a"} Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.753077 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-97bc9b8d4-j6jbm" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.753104 4988 scope.go:117] "RemoveContainer" containerID="c2342eb37abcdebf788ea07b47f06fb4035f9c416f9399acc7b408319740da2d" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.760124 4988 generic.go:334] "Generic (PLEG): container finished" podID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerID="b0af7e44f04e408944e7c1b2eef12ae141deb9974fd6480160b622ff9e9d9379" exitCode=0 Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.760378 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1a9d5d-c267-4671-8a0a-498a24be0e25","Type":"ContainerDied","Data":"b0af7e44f04e408944e7c1b2eef12ae141deb9974fd6480160b622ff9e9d9379"} Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.763476 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerStarted","Data":"5a2eb0e068859a3bd661ad08cefa9917f767b858b45c64db5006e3f0756a6e12"} Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.768161 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-h48h7" event={"ID":"c92c0a02-48da-476e-869f-db5f076076d5","Type":"ContainerStarted","Data":"bdf08b1c0ae7a23f32de82458e55ad905b0f1e25ab60a0ebdffb27c9925421a7"} Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.819685 4988 scope.go:117] "RemoveContainer" containerID="a9147a590705da748bcca443317184e83e47a36d614d35acb8aa413769488875" Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.858321 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-97bc9b8d4-j6jbm"] Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 
07:07:51.865577 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-97bc9b8d4-j6jbm"] Nov 23 07:07:51 crc kubenswrapper[4988]: I1123 07:07:51.872265 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6bjf7"] Nov 23 07:07:51 crc kubenswrapper[4988]: W1123 07:07:51.882717 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7997250c_3018_44d1_9c9a_ff245889d239.slice/crio-5b8f2d614397954d94610e782b132561d859d3c42e7be72e8d2a5126c973d216 WatchSource:0}: Error finding container 5b8f2d614397954d94610e782b132561d859d3c42e7be72e8d2a5126c973d216: Status 404 returned error can't find the container with id 5b8f2d614397954d94610e782b132561d859d3c42e7be72e8d2a5126c973d216 Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.127783 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.219692 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-scripts\") pod \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.219749 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-logs\") pod \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.219807 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm2rr\" (UniqueName: \"kubernetes.io/projected/dd1a9d5d-c267-4671-8a0a-498a24be0e25-kube-api-access-vm2rr\") pod \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.219823 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-public-tls-certs\") pod \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.219849 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-combined-ca-bundle\") pod \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.219996 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-config-data\") pod \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.220050 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.220094 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-httpd-run\") pod \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\" (UID: \"dd1a9d5d-c267-4671-8a0a-498a24be0e25\") " Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.220903 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dd1a9d5d-c267-4671-8a0a-498a24be0e25" (UID: "dd1a9d5d-c267-4671-8a0a-498a24be0e25"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.222410 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-logs" (OuterVolumeSpecName: "logs") pod "dd1a9d5d-c267-4671-8a0a-498a24be0e25" (UID: "dd1a9d5d-c267-4671-8a0a-498a24be0e25"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.231321 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "dd1a9d5d-c267-4671-8a0a-498a24be0e25" (UID: "dd1a9d5d-c267-4671-8a0a-498a24be0e25"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.234298 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-scripts" (OuterVolumeSpecName: "scripts") pod "dd1a9d5d-c267-4671-8a0a-498a24be0e25" (UID: "dd1a9d5d-c267-4671-8a0a-498a24be0e25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.244883 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1a9d5d-c267-4671-8a0a-498a24be0e25-kube-api-access-vm2rr" (OuterVolumeSpecName: "kube-api-access-vm2rr") pod "dd1a9d5d-c267-4671-8a0a-498a24be0e25" (UID: "dd1a9d5d-c267-4671-8a0a-498a24be0e25"). InnerVolumeSpecName "kube-api-access-vm2rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.322685 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.322715 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.322727 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.322735 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd1a9d5d-c267-4671-8a0a-498a24be0e25-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.322746 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm2rr\" (UniqueName: \"kubernetes.io/projected/dd1a9d5d-c267-4671-8a0a-498a24be0e25-kube-api-access-vm2rr\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.326339 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dd1a9d5d-c267-4671-8a0a-498a24be0e25" (UID: "dd1a9d5d-c267-4671-8a0a-498a24be0e25"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.362648 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.366979 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd1a9d5d-c267-4671-8a0a-498a24be0e25" (UID: "dd1a9d5d-c267-4671-8a0a-498a24be0e25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.386142 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5c20-account-create-ctwp2"] Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.426788 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.426826 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.426837 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.434297 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-config-data" (OuterVolumeSpecName: "config-data") pod "dd1a9d5d-c267-4671-8a0a-498a24be0e25" (UID: "dd1a9d5d-c267-4671-8a0a-498a24be0e25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:52 crc kubenswrapper[4988]: W1123 07:07:52.450974 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa607400_86cc_43bd_ac9a_da02dc37dff7.slice/crio-20866a7689315901573cf006e23e82ac049ebfb20fc0af59afd447b844f7176d WatchSource:0}: Error finding container 20866a7689315901573cf006e23e82ac049ebfb20fc0af59afd447b844f7176d: Status 404 returned error can't find the container with id 20866a7689315901573cf006e23e82ac049ebfb20fc0af59afd447b844f7176d Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.470015 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m6l8j"] Nov 23 07:07:52 crc kubenswrapper[4988]: W1123 07:07:52.478729 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd192035c_1590_4efb_af88_66e16c8afab7.slice/crio-a8b16fee6b12acf5a741433f0a8515b3157025f818a7edd804421fb7a95bb3db WatchSource:0}: Error finding container a8b16fee6b12acf5a741433f0a8515b3157025f818a7edd804421fb7a95bb3db: Status 404 returned error can't find the container with id a8b16fee6b12acf5a741433f0a8515b3157025f818a7edd804421fb7a95bb3db Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.490965 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.491011 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-98a8-account-create-79sxz"] Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.522094 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0777df4e-bc8b-4260-a848-fe68de358bbe" path="/var/lib/kubelet/pods/0777df4e-bc8b-4260-a848-fe68de358bbe/volumes" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.528438 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd1a9d5d-c267-4671-8a0a-498a24be0e25-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:52 crc kubenswrapper[4988]: 
I1123 07:07:52.535347 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b1718ea-f44e-41f4-9229-80af33e66280" path="/var/lib/kubelet/pods/2b1718ea-f44e-41f4-9229-80af33e66280/volumes" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.536805 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-cb35-account-create-bcnxl"] Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.799604 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6bjf7" event={"ID":"7997250c-3018-44d1-9c9a-ff245889d239","Type":"ContainerStarted","Data":"dd7bc2fffd6371144890721b68d24739d43a19a086794cc91ba4e0f60f9016ab"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.799705 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6bjf7" event={"ID":"7997250c-3018-44d1-9c9a-ff245889d239","Type":"ContainerStarted","Data":"5b8f2d614397954d94610e782b132561d859d3c42e7be72e8d2a5126c973d216"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.836476 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4be2080-1204-4f6e-ac00-bba757695872","Type":"ContainerStarted","Data":"f2bcf89eb1f7712b654f2d3df8898479ed7056451954cf97154c49d5d2280882"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.837756 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-6bjf7" podStartSLOduration=2.83772045 podStartE2EDuration="2.83772045s" podCreationTimestamp="2025-11-23 07:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:52.818498132 +0000 UTC m=+1325.127010895" watchObservedRunningTime="2025-11-23 07:07:52.83772045 +0000 UTC m=+1325.146233213" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.853532 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dd1a9d5d-c267-4671-8a0a-498a24be0e25","Type":"ContainerDied","Data":"52aed4cfc59f6a8265306578faf140117791e8010be7b8413384ddb112570573"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.853583 4988 scope.go:117] "RemoveContainer" containerID="b0af7e44f04e408944e7c1b2eef12ae141deb9974fd6480160b622ff9e9d9379" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.853683 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.858104 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-cb35-account-create-bcnxl" event={"ID":"e952929a-89c8-4084-836f-854260a97b3e","Type":"ContainerStarted","Data":"2a76468ed1853a2ddcb8d63ca8b8339711a03460f49d06afc9716955d116dada"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.868722 4988 generic.go:334] "Generic (PLEG): container finished" podID="c92c0a02-48da-476e-869f-db5f076076d5" containerID="fcdbbdc0a41476772de1268178d0e9f8aee3aaee637572f98bab9a0697ebd658" exitCode=0 Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.868785 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-h48h7" event={"ID":"c92c0a02-48da-476e-869f-db5f076076d5","Type":"ContainerDied","Data":"fcdbbdc0a41476772de1268178d0e9f8aee3aaee637572f98bab9a0697ebd658"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.875358 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m6l8j" event={"ID":"aa607400-86cc-43bd-ac9a-da02dc37dff7","Type":"ContainerStarted","Data":"20866a7689315901573cf006e23e82ac049ebfb20fc0af59afd447b844f7176d"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.881544 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-98a8-account-create-79sxz" event={"ID":"d192035c-1590-4efb-af88-66e16c8afab7","Type":"ContainerStarted","Data":"a8b16fee6b12acf5a741433f0a8515b3157025f818a7edd804421fb7a95bb3db"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.893473 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5c20-account-create-ctwp2" event={"ID":"170939b5-2d04-422e-a463-fa080622257b","Type":"ContainerStarted","Data":"9a5bf6ff96ba34f38da5739750ec41668dd6de0888baf3eeff91d58bdcfb8148"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.914073 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerStarted","Data":"8a6a7d5c6bf5d7939edb53aafc0dc58a8e62b962aca584c8d7e99580b58c103c"} Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.949312 4988 scope.go:117] "RemoveContainer" containerID="b6e5acca2641d8e9a3eda7015fc4fdfb6bc3436733445b23cf69c1ae74733c20" Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.972102 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:07:52 crc kubenswrapper[4988]: I1123 07:07:52.983183 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.020292 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:07:53 crc kubenswrapper[4988]: E1123 07:07:53.020698 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerName="neutron-httpd" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.020714 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerName="neutron-httpd" Nov 23 07:07:53 crc kubenswrapper[4988]: E1123 07:07:53.020727 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerName="neutron-api" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.020734 4988 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerName="neutron-api" Nov 23 07:07:53 crc kubenswrapper[4988]: E1123 07:07:53.020748 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerName="glance-log" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.020754 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerName="glance-log" Nov 23 07:07:53 crc kubenswrapper[4988]: E1123 07:07:53.020780 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerName="glance-httpd" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.020786 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerName="glance-httpd" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.020966 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerName="neutron-httpd" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.020997 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerName="glance-httpd" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.021011 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" containerName="glance-log" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.021025 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0777df4e-bc8b-4260-a848-fe68de358bbe" containerName="neutron-api" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.021954 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.032535 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.032826 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.057373 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.145396 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.145448 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.145474 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.145512 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.145541 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.145588 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-logs\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.145622 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw8zv\" (UniqueName: \"kubernetes.io/projected/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-kube-api-access-xw8zv\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.145643 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.247075 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.247153 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.247238 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.247274 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.247346 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-logs\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.247387 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw8zv\" (UniqueName: \"kubernetes.io/projected/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-kube-api-access-xw8zv\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.247409 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.248392 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.248683 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.248985 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-logs\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.249464 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.258791 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-scripts\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.258957 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.272001 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.272882 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw8zv\" (UniqueName: \"kubernetes.io/projected/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-kube-api-access-xw8zv\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.275545 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-config-data\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.310394 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.384616 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.672227 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.861710 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-config-data\") pod \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.862494 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-logs\") pod \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.862575 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-internal-tls-certs\") pod \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.862697 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hs5p\" (UniqueName: \"kubernetes.io/projected/b20a898b-0c5b-4b53-a550-246abf8f6d8a-kube-api-access-5hs5p\") pod \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.862815 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-combined-ca-bundle\") pod \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.862872 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-scripts\") pod \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.862952 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.863030 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-httpd-run\") pod \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\" (UID: \"b20a898b-0c5b-4b53-a550-246abf8f6d8a\") " Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.864037 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b20a898b-0c5b-4b53-a550-246abf8f6d8a" (UID: "b20a898b-0c5b-4b53-a550-246abf8f6d8a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.864560 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-logs" (OuterVolumeSpecName: "logs") pod "b20a898b-0c5b-4b53-a550-246abf8f6d8a" (UID: "b20a898b-0c5b-4b53-a550-246abf8f6d8a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.868812 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "b20a898b-0c5b-4b53-a550-246abf8f6d8a" (UID: "b20a898b-0c5b-4b53-a550-246abf8f6d8a"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.869869 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-scripts" (OuterVolumeSpecName: "scripts") pod "b20a898b-0c5b-4b53-a550-246abf8f6d8a" (UID: "b20a898b-0c5b-4b53-a550-246abf8f6d8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.877866 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b20a898b-0c5b-4b53-a550-246abf8f6d8a-kube-api-access-5hs5p" (OuterVolumeSpecName: "kube-api-access-5hs5p") pod "b20a898b-0c5b-4b53-a550-246abf8f6d8a" (UID: "b20a898b-0c5b-4b53-a550-246abf8f6d8a"). InnerVolumeSpecName "kube-api-access-5hs5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.932176 4988 generic.go:334] "Generic (PLEG): container finished" podID="7997250c-3018-44d1-9c9a-ff245889d239" containerID="dd7bc2fffd6371144890721b68d24739d43a19a086794cc91ba4e0f60f9016ab" exitCode=0 Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.932559 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6bjf7" event={"ID":"7997250c-3018-44d1-9c9a-ff245889d239","Type":"ContainerDied","Data":"dd7bc2fffd6371144890721b68d24739d43a19a086794cc91ba4e0f60f9016ab"} Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.932732 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b20a898b-0c5b-4b53-a550-246abf8f6d8a" (UID: "b20a898b-0c5b-4b53-a550-246abf8f6d8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.936797 4988 generic.go:334] "Generic (PLEG): container finished" podID="aa607400-86cc-43bd-ac9a-da02dc37dff7" containerID="d9da8641474615fa974f048f1a86218e5baa107275dcf3f618a64368d2e3fd27" exitCode=0 Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.936906 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m6l8j" event={"ID":"aa607400-86cc-43bd-ac9a-da02dc37dff7","Type":"ContainerDied","Data":"d9da8641474615fa974f048f1a86218e5baa107275dcf3f618a64368d2e3fd27"} Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.939443 4988 generic.go:334] "Generic (PLEG): container finished" podID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" containerID="897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70" exitCode=0 Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.939504 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b20a898b-0c5b-4b53-a550-246abf8f6d8a","Type":"ContainerDied","Data":"897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70"} Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.939528 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b20a898b-0c5b-4b53-a550-246abf8f6d8a","Type":"ContainerDied","Data":"0f20640f52809a161549efb74a91ca3c768dbb3c3934d7898499e04bcb7da5f1"} Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.939552 4988 scope.go:117] "RemoveContainer" containerID="897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.939720 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.951320 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerStarted","Data":"eb0a070e415f8dfd7b6419bcfdf6522f498c84c900e7d00ee0bc2953b173f539"} Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.953167 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-config-data" (OuterVolumeSpecName: "config-data") pod "b20a898b-0c5b-4b53-a550-246abf8f6d8a" (UID: "b20a898b-0c5b-4b53-a550-246abf8f6d8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.968086 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hs5p\" (UniqueName: \"kubernetes.io/projected/b20a898b-0c5b-4b53-a550-246abf8f6d8a-kube-api-access-5hs5p\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.968386 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.968401 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.968460 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.968474 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.968482 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.968490 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b20a898b-0c5b-4b53-a550-246abf8f6d8a-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.975026 4988 generic.go:334] "Generic (PLEG): container finished" podID="e952929a-89c8-4084-836f-854260a97b3e" containerID="922da97bc4cd2da2f4e745eb80e83dad312f6fcac0caca62e021bf8c20c674e1" exitCode=0 Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.975270 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-cb35-account-create-bcnxl" event={"ID":"e952929a-89c8-4084-836f-854260a97b3e","Type":"ContainerDied","Data":"922da97bc4cd2da2f4e745eb80e83dad312f6fcac0caca62e021bf8c20c674e1"} Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.980305 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b20a898b-0c5b-4b53-a550-246abf8f6d8a" (UID: "b20a898b-0c5b-4b53-a550-246abf8f6d8a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.987441 4988 generic.go:334] "Generic (PLEG): container finished" podID="d192035c-1590-4efb-af88-66e16c8afab7" containerID="f1350fd41c641245697cbffd16b5a000c909ca100d07d3c04f67ef7da811ed50" exitCode=0 Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.987526 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-98a8-account-create-79sxz" event={"ID":"d192035c-1590-4efb-af88-66e16c8afab7","Type":"ContainerDied","Data":"f1350fd41c641245697cbffd16b5a000c909ca100d07d3c04f67ef7da811ed50"} Nov 23 07:07:53 crc kubenswrapper[4988]: I1123 07:07:53.994587 4988 scope.go:117] "RemoveContainer" containerID="2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.009957 4988 generic.go:334] "Generic (PLEG): container finished" podID="170939b5-2d04-422e-a463-fa080622257b" containerID="91c89292a316155edc0bb33717dfbfd97559a61804e616f57f0be6815c8aa1f1" exitCode=0 Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.010274 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5c20-account-create-ctwp2" event={"ID":"170939b5-2d04-422e-a463-fa080622257b","Type":"ContainerDied","Data":"91c89292a316155edc0bb33717dfbfd97559a61804e616f57f0be6815c8aa1f1"} Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.025921 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.027824 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4be2080-1204-4f6e-ac00-bba757695872","Type":"ContainerStarted","Data":"0528954f5da33c5e64f1f55abc59161faf590954a1985c469ea4c1f06355f574"} Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.039097 4988 scope.go:117] "RemoveContainer" containerID="897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70" Nov 23 07:07:54 crc kubenswrapper[4988]: E1123 07:07:54.039691 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70\": container with ID starting with 897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70 not found: ID does not exist" containerID="897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.039969 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70"} err="failed to get container status \"897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70\": rpc error: code = NotFound desc = could not find container \"897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70\": container with ID starting with 897f48d9142102fa2ce113a5bdd6b1fba0e5aeabb440e014f8d8a552371b6c70 not found: ID does not exist" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.040009 4988 scope.go:117] "RemoveContainer" containerID="2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd" Nov 23 07:07:54 crc kubenswrapper[4988]: E1123 07:07:54.040412 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd\": container with ID starting with 2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd not found: ID does not exist" containerID="2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.040456 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd"} err="failed to get container status \"2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd\": rpc error: code = NotFound desc = could not find container \"2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd\": container with ID starting with 2033384ca5fe9baf376cbae1cd4be8c168166552b5b954b19db2b719905afecd not found: ID does not exist" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.087275 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b20a898b-0c5b-4b53-a550-246abf8f6d8a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.087317 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.108622 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:07:54 crc kubenswrapper[4988]: W1123 07:07:54.119419 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd525929_59bb_4b7f_b3a4_12e2e4a03cd4.slice/crio-a279773d7734d864599f45d58494784423d95659d4008a3b776645c134609b1b WatchSource:0}: Error finding container a279773d7734d864599f45d58494784423d95659d4008a3b776645c134609b1b: Status 404 returned error can't find the container with id a279773d7734d864599f45d58494784423d95659d4008a3b776645c134609b1b Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.363252 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.368595 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.411213 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:07:54 crc kubenswrapper[4988]: E1123 07:07:54.411628 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" containerName="glance-httpd" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.411640 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" containerName="glance-httpd" Nov 23 07:07:54 crc kubenswrapper[4988]: E1123 07:07:54.411650 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" containerName="glance-log" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.411656 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" containerName="glance-log" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.411845 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" 
containerName="glance-httpd" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.411862 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" containerName="glance-log" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.412903 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.424777 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.431567 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.460767 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.509220 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brc9m\" (UniqueName: \"kubernetes.io/projected/720d09f3-1104-47a0-93e9-ffb48cf1ae69-kube-api-access-brc9m\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.509293 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-logs\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.509328 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.509352 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-config-data\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.509377 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.509410 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.509446 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-scripts\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.509471 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.526915 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b20a898b-0c5b-4b53-a550-246abf8f6d8a" path="/var/lib/kubelet/pods/b20a898b-0c5b-4b53-a550-246abf8f6d8a/volumes" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.528367 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1a9d5d-c267-4671-8a0a-498a24be0e25" path="/var/lib/kubelet/pods/dd1a9d5d-c267-4671-8a0a-498a24be0e25/volumes" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.573740 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-h48h7" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.610897 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-config-data\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.610944 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.610976 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.611011 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-scripts\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.611036 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.611082 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brc9m\" (UniqueName: \"kubernetes.io/projected/720d09f3-1104-47a0-93e9-ffb48cf1ae69-kube-api-access-brc9m\") pod \"glance-default-internal-api-0\" (UID: 
\"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.611121 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-logs\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.611152 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.611517 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.626216 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-scripts\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.632978 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.634875 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.635160 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.636002 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-logs\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.640095 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brc9m\" (UniqueName: \"kubernetes.io/projected/720d09f3-1104-47a0-93e9-ffb48cf1ae69-kube-api-access-brc9m\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: 
I1123 07:07:54.640485 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-config-data\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.654596 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.712081 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x92lv\" (UniqueName: \"kubernetes.io/projected/c92c0a02-48da-476e-869f-db5f076076d5-kube-api-access-x92lv\") pod \"c92c0a02-48da-476e-869f-db5f076076d5\" (UID: \"c92c0a02-48da-476e-869f-db5f076076d5\") " Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.712651 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c92c0a02-48da-476e-869f-db5f076076d5-operator-scripts\") pod \"c92c0a02-48da-476e-869f-db5f076076d5\" (UID: \"c92c0a02-48da-476e-869f-db5f076076d5\") " Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.713072 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c92c0a02-48da-476e-869f-db5f076076d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c92c0a02-48da-476e-869f-db5f076076d5" (UID: "c92c0a02-48da-476e-869f-db5f076076d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.715128 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c92c0a02-48da-476e-869f-db5f076076d5-kube-api-access-x92lv" (OuterVolumeSpecName: "kube-api-access-x92lv") pod "c92c0a02-48da-476e-869f-db5f076076d5" (UID: "c92c0a02-48da-476e-869f-db5f076076d5"). InnerVolumeSpecName "kube-api-access-x92lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.814371 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c92c0a02-48da-476e-869f-db5f076076d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.814413 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x92lv\" (UniqueName: \"kubernetes.io/projected/c92c0a02-48da-476e-869f-db5f076076d5-kube-api-access-x92lv\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:54 crc kubenswrapper[4988]: I1123 07:07:54.850375 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.054950 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4be2080-1204-4f6e-ac00-bba757695872","Type":"ContainerStarted","Data":"70c5ffb584b9bbe5c2b209c28d137387ef7c662311797b190ca13786ca38138a"} Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.056319 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.058749 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4","Type":"ContainerStarted","Data":"a279773d7734d864599f45d58494784423d95659d4008a3b776645c134609b1b"} Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.061184 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerStarted","Data":"4ee751f617a9771d3b3f23a049411060b1b77cee8c88e6c221b77c09d450c5de"} Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.061366 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="ceilometer-central-agent" containerID="cri-o://5a2eb0e068859a3bd661ad08cefa9917f767b858b45c64db5006e3f0756a6e12" gracePeriod=30 Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.061649 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.061700 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="proxy-httpd" containerID="cri-o://4ee751f617a9771d3b3f23a049411060b1b77cee8c88e6c221b77c09d450c5de" gracePeriod=30 Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.061753 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="sg-core" containerID="cri-o://eb0a070e415f8dfd7b6419bcfdf6522f498c84c900e7d00ee0bc2953b173f539" gracePeriod=30 Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.061800 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="ceilometer-notification-agent" containerID="cri-o://8a6a7d5c6bf5d7939edb53aafc0dc58a8e62b962aca584c8d7e99580b58c103c" gracePeriod=30 Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.068095 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-h48h7" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.108559 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-h48h7" event={"ID":"c92c0a02-48da-476e-869f-db5f076076d5","Type":"ContainerDied","Data":"bdf08b1c0ae7a23f32de82458e55ad905b0f1e25ab60a0ebdffb27c9925421a7"} Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.108635 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdf08b1c0ae7a23f32de82458e55ad905b0f1e25ab60a0ebdffb27c9925421a7" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.119229 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.119210126 podStartE2EDuration="5.119210126s" podCreationTimestamp="2025-11-23 07:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:55.109475584 +0000 UTC m=+1327.417988347" watchObservedRunningTime="2025-11-23 07:07:55.119210126 +0000 UTC m=+1327.427722889" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.135577 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.425458971 podStartE2EDuration="7.135560325s" podCreationTimestamp="2025-11-23 07:07:48 +0000 UTC" firstStartedPulling="2025-11-23 07:07:49.833028955 +0000 UTC m=+1322.141541718" lastFinishedPulling="2025-11-23 07:07:54.543130319 +0000 UTC m=+1326.851643072" observedRunningTime="2025-11-23 07:07:55.129980213 +0000 UTC m=+1327.438492986" watchObservedRunningTime="2025-11-23 07:07:55.135560325 +0000 UTC m=+1327.444073088" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.443030 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.583545 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.749927 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h29xb\" (UniqueName: \"kubernetes.io/projected/d192035c-1590-4efb-af88-66e16c8afab7-kube-api-access-h29xb\") pod \"d192035c-1590-4efb-af88-66e16c8afab7\" (UID: \"d192035c-1590-4efb-af88-66e16c8afab7\") " Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.750055 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d192035c-1590-4efb-af88-66e16c8afab7-operator-scripts\") pod \"d192035c-1590-4efb-af88-66e16c8afab7\" (UID: \"d192035c-1590-4efb-af88-66e16c8afab7\") " Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.751041 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d192035c-1590-4efb-af88-66e16c8afab7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d192035c-1590-4efb-af88-66e16c8afab7" (UID: "d192035c-1590-4efb-af88-66e16c8afab7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.757493 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d192035c-1590-4efb-af88-66e16c8afab7-kube-api-access-h29xb" (OuterVolumeSpecName: "kube-api-access-h29xb") pod "d192035c-1590-4efb-af88-66e16c8afab7" (UID: "d192035c-1590-4efb-af88-66e16c8afab7"). InnerVolumeSpecName "kube-api-access-h29xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.852049 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h29xb\" (UniqueName: \"kubernetes.io/projected/d192035c-1590-4efb-af88-66e16c8afab7-kube-api-access-h29xb\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.852381 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d192035c-1590-4efb-af88-66e16c8afab7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.857065 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.868417 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.890840 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:55 crc kubenswrapper[4988]: I1123 07:07:55.921726 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.055058 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxxpb\" (UniqueName: \"kubernetes.io/projected/aa607400-86cc-43bd-ac9a-da02dc37dff7-kube-api-access-lxxpb\") pod \"aa607400-86cc-43bd-ac9a-da02dc37dff7\" (UID: \"aa607400-86cc-43bd-ac9a-da02dc37dff7\") " Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.055137 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa607400-86cc-43bd-ac9a-da02dc37dff7-operator-scripts\") pod \"aa607400-86cc-43bd-ac9a-da02dc37dff7\" (UID: \"aa607400-86cc-43bd-ac9a-da02dc37dff7\") " Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.055258 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170939b5-2d04-422e-a463-fa080622257b-operator-scripts\") pod \"170939b5-2d04-422e-a463-fa080622257b\" (UID: \"170939b5-2d04-422e-a463-fa080622257b\") " Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.055403 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9t2c\" (UniqueName: \"kubernetes.io/projected/7997250c-3018-44d1-9c9a-ff245889d239-kube-api-access-t9t2c\") pod \"7997250c-3018-44d1-9c9a-ff245889d239\" (UID: \"7997250c-3018-44d1-9c9a-ff245889d239\") " Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.055432 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e952929a-89c8-4084-836f-854260a97b3e-operator-scripts\") pod \"e952929a-89c8-4084-836f-854260a97b3e\" (UID: \"e952929a-89c8-4084-836f-854260a97b3e\") " Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.055464 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7997250c-3018-44d1-9c9a-ff245889d239-operator-scripts\") pod \"7997250c-3018-44d1-9c9a-ff245889d239\" (UID: \"7997250c-3018-44d1-9c9a-ff245889d239\") " Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.055525 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84hkr\" (UniqueName: \"kubernetes.io/projected/e952929a-89c8-4084-836f-854260a97b3e-kube-api-access-84hkr\") pod \"e952929a-89c8-4084-836f-854260a97b3e\" (UID: \"e952929a-89c8-4084-836f-854260a97b3e\") " Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.055564 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6tjn\" (UniqueName: \"kubernetes.io/projected/170939b5-2d04-422e-a463-fa080622257b-kube-api-access-j6tjn\") pod \"170939b5-2d04-422e-a463-fa080622257b\" (UID: \"170939b5-2d04-422e-a463-fa080622257b\") " Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.055877 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170939b5-2d04-422e-a463-fa080622257b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "170939b5-2d04-422e-a463-fa080622257b" (UID: "170939b5-2d04-422e-a463-fa080622257b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.056207 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/170939b5-2d04-422e-a463-fa080622257b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.056441 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7997250c-3018-44d1-9c9a-ff245889d239-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7997250c-3018-44d1-9c9a-ff245889d239" (UID: "7997250c-3018-44d1-9c9a-ff245889d239"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.056668 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e952929a-89c8-4084-836f-854260a97b3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e952929a-89c8-4084-836f-854260a97b3e" (UID: "e952929a-89c8-4084-836f-854260a97b3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.058145 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa607400-86cc-43bd-ac9a-da02dc37dff7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa607400-86cc-43bd-ac9a-da02dc37dff7" (UID: "aa607400-86cc-43bd-ac9a-da02dc37dff7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.060437 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e952929a-89c8-4084-836f-854260a97b3e-kube-api-access-84hkr" (OuterVolumeSpecName: "kube-api-access-84hkr") pod "e952929a-89c8-4084-836f-854260a97b3e" (UID: "e952929a-89c8-4084-836f-854260a97b3e"). InnerVolumeSpecName "kube-api-access-84hkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.060544 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7997250c-3018-44d1-9c9a-ff245889d239-kube-api-access-t9t2c" (OuterVolumeSpecName: "kube-api-access-t9t2c") pod "7997250c-3018-44d1-9c9a-ff245889d239" (UID: "7997250c-3018-44d1-9c9a-ff245889d239"). InnerVolumeSpecName "kube-api-access-t9t2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.067840 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa607400-86cc-43bd-ac9a-da02dc37dff7-kube-api-access-lxxpb" (OuterVolumeSpecName: "kube-api-access-lxxpb") pod "aa607400-86cc-43bd-ac9a-da02dc37dff7" (UID: "aa607400-86cc-43bd-ac9a-da02dc37dff7"). InnerVolumeSpecName "kube-api-access-lxxpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.070308 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170939b5-2d04-422e-a463-fa080622257b-kube-api-access-j6tjn" (OuterVolumeSpecName: "kube-api-access-j6tjn") pod "170939b5-2d04-422e-a463-fa080622257b" (UID: "170939b5-2d04-422e-a463-fa080622257b"). InnerVolumeSpecName "kube-api-access-j6tjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.083569 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m6l8j" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.083609 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m6l8j" event={"ID":"aa607400-86cc-43bd-ac9a-da02dc37dff7","Type":"ContainerDied","Data":"20866a7689315901573cf006e23e82ac049ebfb20fc0af59afd447b844f7176d"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.085613 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20866a7689315901573cf006e23e82ac049ebfb20fc0af59afd447b844f7176d" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.089863 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-98a8-account-create-79sxz" event={"ID":"d192035c-1590-4efb-af88-66e16c8afab7","Type":"ContainerDied","Data":"a8b16fee6b12acf5a741433f0a8515b3157025f818a7edd804421fb7a95bb3db"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.089914 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b16fee6b12acf5a741433f0a8515b3157025f818a7edd804421fb7a95bb3db" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.089985 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-98a8-account-create-79sxz" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.096666 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5c20-account-create-ctwp2" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.097764 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5c20-account-create-ctwp2" event={"ID":"170939b5-2d04-422e-a463-fa080622257b","Type":"ContainerDied","Data":"9a5bf6ff96ba34f38da5739750ec41668dd6de0888baf3eeff91d58bdcfb8148"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.097797 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5bf6ff96ba34f38da5739750ec41668dd6de0888baf3eeff91d58bdcfb8148" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.117200 4988 generic.go:334] "Generic (PLEG): container finished" podID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerID="4ee751f617a9771d3b3f23a049411060b1b77cee8c88e6c221b77c09d450c5de" exitCode=0 Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.117234 4988 generic.go:334] "Generic (PLEG): container finished" podID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerID="eb0a070e415f8dfd7b6419bcfdf6522f498c84c900e7d00ee0bc2953b173f539" exitCode=2 Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.117241 4988 generic.go:334] "Generic (PLEG): container finished" podID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerID="8a6a7d5c6bf5d7939edb53aafc0dc58a8e62b962aca584c8d7e99580b58c103c" exitCode=0 Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.117326 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerDied","Data":"4ee751f617a9771d3b3f23a049411060b1b77cee8c88e6c221b77c09d450c5de"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.117353 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerDied","Data":"eb0a070e415f8dfd7b6419bcfdf6522f498c84c900e7d00ee0bc2953b173f539"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.117363 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerDied","Data":"8a6a7d5c6bf5d7939edb53aafc0dc58a8e62b962aca584c8d7e99580b58c103c"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.122662 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4","Type":"ContainerStarted","Data":"3efddc33cd9ce7c30bcaaa3df7dc5f188157ba378b301c9729e7ab2b1c6e2333"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.124583 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"720d09f3-1104-47a0-93e9-ffb48cf1ae69","Type":"ContainerStarted","Data":"6e39c9b2114a3aef3d14e3c92c17e738ef53fc8cd9081f4ce39a37150392fe0f"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.127449 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-cb35-account-create-bcnxl" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.127464 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-cb35-account-create-bcnxl" event={"ID":"e952929a-89c8-4084-836f-854260a97b3e","Type":"ContainerDied","Data":"2a76468ed1853a2ddcb8d63ca8b8339711a03460f49d06afc9716955d116dada"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.127492 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a76468ed1853a2ddcb8d63ca8b8339711a03460f49d06afc9716955d116dada" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.131560 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6bjf7" event={"ID":"7997250c-3018-44d1-9c9a-ff245889d239","Type":"ContainerDied","Data":"5b8f2d614397954d94610e782b132561d859d3c42e7be72e8d2a5126c973d216"} Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.131597 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6bjf7" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.131607 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8f2d614397954d94610e782b132561d859d3c42e7be72e8d2a5126c973d216" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.157940 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84hkr\" (UniqueName: \"kubernetes.io/projected/e952929a-89c8-4084-836f-854260a97b3e-kube-api-access-84hkr\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.157985 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6tjn\" (UniqueName: \"kubernetes.io/projected/170939b5-2d04-422e-a463-fa080622257b-kube-api-access-j6tjn\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.157999 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxxpb\" (UniqueName: \"kubernetes.io/projected/aa607400-86cc-43bd-ac9a-da02dc37dff7-kube-api-access-lxxpb\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.158013 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa607400-86cc-43bd-ac9a-da02dc37dff7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.158027 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9t2c\" (UniqueName: \"kubernetes.io/projected/7997250c-3018-44d1-9c9a-ff245889d239-kube-api-access-t9t2c\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.158038 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e952929a-89c8-4084-836f-854260a97b3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:56 crc kubenswrapper[4988]: I1123 07:07:56.158049 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7997250c-3018-44d1-9c9a-ff245889d239-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:07:57 crc kubenswrapper[4988]: I1123 07:07:57.144770 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"720d09f3-1104-47a0-93e9-ffb48cf1ae69","Type":"ContainerStarted","Data":"ffc32302cd38e863bb6d6aaea86be25fa12696798a187ea26576d320a9c5ccd3"} Nov 23 07:07:57 crc kubenswrapper[4988]: I1123 07:07:57.145214 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"720d09f3-1104-47a0-93e9-ffb48cf1ae69","Type":"ContainerStarted","Data":"bc100c14d5a403c7cb084cb19a58629ec50c4006b31564afe6806ad8247c5c3d"} Nov 23 07:07:57 crc kubenswrapper[4988]: I1123 07:07:57.146760 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4","Type":"ContainerStarted","Data":"482947669a01cf95a71f90baa22b06aeac92eb3bd55c440708e11d5e72e1f4ca"} Nov 23 07:07:57 crc kubenswrapper[4988]: I1123 07:07:57.172715 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.172697282 podStartE2EDuration="3.172697282s" podCreationTimestamp="2025-11-23 07:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:57.166375231 +0000 UTC m=+1329.474887994" watchObservedRunningTime="2025-11-23 07:07:57.172697282 +0000 UTC m=+1329.481210045" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.180813 4988 generic.go:334] "Generic (PLEG): container finished" podID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerID="5a2eb0e068859a3bd661ad08cefa9917f767b858b45c64db5006e3f0756a6e12" exitCode=0 Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.181009 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerDied","Data":"5a2eb0e068859a3bd661ad08cefa9917f767b858b45c64db5006e3f0756a6e12"} Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.449842 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.481451 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.481430337 podStartE2EDuration="8.481430337s" podCreationTimestamp="2025-11-23 07:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:57.195941225 +0000 UTC m=+1329.504453988" watchObservedRunningTime="2025-11-23 07:08:00.481430337 +0000 UTC m=+1332.789943110" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.534049 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-scripts\") pod \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.534292 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-config-data\") pod \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.534330 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-log-httpd\") pod \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.534384 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-sg-core-conf-yaml\") pod \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.534409 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-run-httpd\") pod \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.534427 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-combined-ca-bundle\") pod \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.534464 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sl7j\" (UniqueName: \"kubernetes.io/projected/89bff2cd-3a2c-4541-940a-626e7c5f4f54-kube-api-access-9sl7j\") pod \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\" (UID: \"89bff2cd-3a2c-4541-940a-626e7c5f4f54\") " Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.535275 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "89bff2cd-3a2c-4541-940a-626e7c5f4f54" (UID: "89bff2cd-3a2c-4541-940a-626e7c5f4f54"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.535236 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "89bff2cd-3a2c-4541-940a-626e7c5f4f54" (UID: "89bff2cd-3a2c-4541-940a-626e7c5f4f54"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.541080 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-scripts" (OuterVolumeSpecName: "scripts") pod "89bff2cd-3a2c-4541-940a-626e7c5f4f54" (UID: "89bff2cd-3a2c-4541-940a-626e7c5f4f54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.555552 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89bff2cd-3a2c-4541-940a-626e7c5f4f54-kube-api-access-9sl7j" (OuterVolumeSpecName: "kube-api-access-9sl7j") pod "89bff2cd-3a2c-4541-940a-626e7c5f4f54" (UID: "89bff2cd-3a2c-4541-940a-626e7c5f4f54"). InnerVolumeSpecName "kube-api-access-9sl7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.575906 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "89bff2cd-3a2c-4541-940a-626e7c5f4f54" (UID: "89bff2cd-3a2c-4541-940a-626e7c5f4f54"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.621443 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89bff2cd-3a2c-4541-940a-626e7c5f4f54" (UID: "89bff2cd-3a2c-4541-940a-626e7c5f4f54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.636255 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.636292 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.636308 4988 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.636320 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89bff2cd-3a2c-4541-940a-626e7c5f4f54-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.636334 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.636347 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sl7j\" (UniqueName: \"kubernetes.io/projected/89bff2cd-3a2c-4541-940a-626e7c5f4f54-kube-api-access-9sl7j\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.654069 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-config-data" (OuterVolumeSpecName: "config-data") pod "89bff2cd-3a2c-4541-940a-626e7c5f4f54" (UID: "89bff2cd-3a2c-4541-940a-626e7c5f4f54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:00 crc kubenswrapper[4988]: I1123 07:08:00.738104 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bff2cd-3a2c-4541-940a-626e7c5f4f54-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.192708 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89bff2cd-3a2c-4541-940a-626e7c5f4f54","Type":"ContainerDied","Data":"0b031daac2d1afde95bc255e5e294dc4818f16f8016c4ea92985b1ad4fcfff81"} Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.192778 4988 scope.go:117] "RemoveContainer" containerID="4ee751f617a9771d3b3f23a049411060b1b77cee8c88e6c221b77c09d450c5de" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.192827 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.224593 4988 scope.go:117] "RemoveContainer" containerID="eb0a070e415f8dfd7b6419bcfdf6522f498c84c900e7d00ee0bc2953b173f539" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.247346 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.249714 4988 scope.go:117] "RemoveContainer" containerID="8a6a7d5c6bf5d7939edb53aafc0dc58a8e62b962aca584c8d7e99580b58c103c" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.294599 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.303182 4988 scope.go:117] "RemoveContainer" containerID="5a2eb0e068859a3bd661ad08cefa9917f767b858b45c64db5006e3f0756a6e12" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.318787 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319278 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c92c0a02-48da-476e-869f-db5f076076d5" containerName="mariadb-database-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319314 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c92c0a02-48da-476e-869f-db5f076076d5" containerName="mariadb-database-create" Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319331 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e952929a-89c8-4084-836f-854260a97b3e" containerName="mariadb-account-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319339 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e952929a-89c8-4084-836f-854260a97b3e" containerName="mariadb-account-create" Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319356 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170939b5-2d04-422e-a463-fa080622257b" containerName="mariadb-account-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319363 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="170939b5-2d04-422e-a463-fa080622257b" containerName="mariadb-account-create" Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319387 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="ceilometer-notification-agent" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319394 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="ceilometer-notification-agent" Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319407 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d192035c-1590-4efb-af88-66e16c8afab7" containerName="mariadb-account-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319413 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d192035c-1590-4efb-af88-66e16c8afab7" containerName="mariadb-account-create" Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319426 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="proxy-httpd" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319434 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="proxy-httpd" Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319454 
4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7997250c-3018-44d1-9c9a-ff245889d239" containerName="mariadb-database-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319462 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7997250c-3018-44d1-9c9a-ff245889d239" containerName="mariadb-database-create" Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319474 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="sg-core" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319480 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="sg-core" Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319495 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa607400-86cc-43bd-ac9a-da02dc37dff7" containerName="mariadb-database-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319502 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa607400-86cc-43bd-ac9a-da02dc37dff7" containerName="mariadb-database-create" Nov 23 07:08:01 crc kubenswrapper[4988]: E1123 07:08:01.319518 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="ceilometer-central-agent" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319525 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="ceilometer-central-agent" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319738 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="e952929a-89c8-4084-836f-854260a97b3e" containerName="mariadb-account-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319755 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7997250c-3018-44d1-9c9a-ff245889d239" containerName="mariadb-database-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319773 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="ceilometer-notification-agent" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319797 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d192035c-1590-4efb-af88-66e16c8afab7" containerName="mariadb-account-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319809 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="ceilometer-central-agent" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319821 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="proxy-httpd" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319834 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c92c0a02-48da-476e-869f-db5f076076d5" containerName="mariadb-database-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319848 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa607400-86cc-43bd-ac9a-da02dc37dff7" containerName="mariadb-database-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319859 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="170939b5-2d04-422e-a463-fa080622257b" containerName="mariadb-account-create" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.319869 4988 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" containerName="sg-core" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.321968 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.328603 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.329577 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.342975 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.367274 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9z7ns"] Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.368796 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.373685 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-544pw" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.373944 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.374077 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.376025 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9z7ns"] Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449053 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449096 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-log-httpd\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449274 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449391 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t78vg\" (UniqueName: \"kubernetes.io/projected/d2e61067-217d-4d40-8074-77774a895624-kube-api-access-t78vg\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449546 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-config-data\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449614 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-scripts\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449726 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-scripts\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449786 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-run-httpd\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449836 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449889 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m547k\" (UniqueName: \"kubernetes.io/projected/8539c348-f366-4d11-862b-a645eaaf4a40-kube-api-access-m547k\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.449918 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-config-data\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551230 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-config-data\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551299 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-scripts\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551374 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-scripts\") pod 
\"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551395 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-run-httpd\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551420 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551485 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m547k\" (UniqueName: \"kubernetes.io/projected/8539c348-f366-4d11-862b-a645eaaf4a40-kube-api-access-m547k\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551508 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-config-data\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551561 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551582 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-log-httpd\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551603 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.551657 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t78vg\" (UniqueName: \"kubernetes.io/projected/d2e61067-217d-4d40-8074-77774a895624-kube-api-access-t78vg\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.554238 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-run-httpd\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.554658 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-log-httpd\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.557810 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-scripts\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.557857 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.562689 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.562802 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-scripts\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.562906 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-config-data\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.563845 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-config-data\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.570238 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.576236 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t78vg\" (UniqueName: \"kubernetes.io/projected/d2e61067-217d-4d40-8074-77774a895624-kube-api-access-t78vg\") pod \"ceilometer-0\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.576752 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m547k\" (UniqueName: \"kubernetes.io/projected/8539c348-f366-4d11-862b-a645eaaf4a40-kube-api-access-m547k\") pod \"nova-cell0-conductor-db-sync-9z7ns\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.647706 4988 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:08:01 crc kubenswrapper[4988]: I1123 07:08:01.696683 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:02 crc kubenswrapper[4988]: I1123 07:08:02.133350 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:02 crc kubenswrapper[4988]: W1123 07:08:02.141528 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2e61067_217d_4d40_8074_77774a895624.slice/crio-a6180a124013c98220f7e79d0b800dcab2db095debb393510c5dba9120c79f75 WatchSource:0}: Error finding container a6180a124013c98220f7e79d0b800dcab2db095debb393510c5dba9120c79f75: Status 404 returned error can't find the container with id a6180a124013c98220f7e79d0b800dcab2db095debb393510c5dba9120c79f75 Nov 23 07:08:02 crc kubenswrapper[4988]: W1123 07:08:02.209547 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8539c348_f366_4d11_862b_a645eaaf4a40.slice/crio-d2b7122732d3023cd33a7101645e865e35d3e2bbfb25ca341a44fbff61a1c8fc WatchSource:0}: Error finding container d2b7122732d3023cd33a7101645e865e35d3e2bbfb25ca341a44fbff61a1c8fc: Status 404 returned error can't find the container with id d2b7122732d3023cd33a7101645e865e35d3e2bbfb25ca341a44fbff61a1c8fc Nov 23 07:08:02 crc kubenswrapper[4988]: I1123 07:08:02.209824 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerStarted","Data":"a6180a124013c98220f7e79d0b800dcab2db095debb393510c5dba9120c79f75"} Nov 23 07:08:02 crc kubenswrapper[4988]: I1123 07:08:02.212267 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9z7ns"] Nov 23 07:08:02 crc kubenswrapper[4988]: I1123 07:08:02.506935 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89bff2cd-3a2c-4541-940a-626e7c5f4f54" path="/var/lib/kubelet/pods/89bff2cd-3a2c-4541-940a-626e7c5f4f54/volumes" Nov 23 07:08:03 crc kubenswrapper[4988]: I1123 07:08:03.222110 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9z7ns" event={"ID":"8539c348-f366-4d11-862b-a645eaaf4a40","Type":"ContainerStarted","Data":"d2b7122732d3023cd33a7101645e865e35d3e2bbfb25ca341a44fbff61a1c8fc"} Nov 23 07:08:03 crc kubenswrapper[4988]: I1123 07:08:03.224612 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerStarted","Data":"5557a4c554318e5bbb9e2e377f18ba88f2bdee6bc7dfba3220348979f1738168"} Nov 23 07:08:03 crc kubenswrapper[4988]: I1123 07:08:03.256368 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 23 07:08:03 crc kubenswrapper[4988]: I1123 07:08:03.385094 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:08:03 crc kubenswrapper[4988]: I1123 07:08:03.385135 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:08:03 crc kubenswrapper[4988]: I1123 07:08:03.424547 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Nov 23 07:08:03 crc kubenswrapper[4988]: I1123 07:08:03.434028 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 07:08:04 crc kubenswrapper[4988]: I1123 07:08:04.244795 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerStarted","Data":"a95b1a5aa614587332d01ca2e661f70ed909880355e1ba25b999c226dfadef34"} Nov 23 07:08:04 crc kubenswrapper[4988]: I1123 07:08:04.245654 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:08:04 crc kubenswrapper[4988]: I1123 07:08:04.245807 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:08:04 crc kubenswrapper[4988]: I1123 07:08:04.850746 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:08:04 crc kubenswrapper[4988]: I1123 07:08:04.851021 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:08:04 crc kubenswrapper[4988]: I1123 07:08:04.885756 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:08:04 crc kubenswrapper[4988]: I1123 07:08:04.900311 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:08:05 crc kubenswrapper[4988]: I1123 07:08:05.254781 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerStarted","Data":"728161b38b950684924d8142009fd5c3b72f7f24abdb1a3fa6e6595495d742e7"} Nov 23 07:08:05 crc kubenswrapper[4988]: I1123 07:08:05.255277 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:08:05 crc kubenswrapper[4988]: I1123 07:08:05.255313 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:08:06 crc kubenswrapper[4988]: I1123 07:08:06.263257 4988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:08:06 crc kubenswrapper[4988]: I1123 07:08:06.263642 4988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:08:06 crc kubenswrapper[4988]: I1123 07:08:06.418173 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:08:06 crc kubenswrapper[4988]: I1123 07:08:06.423586 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:08:07 crc kubenswrapper[4988]: I1123 07:08:07.244794 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:08:07 crc kubenswrapper[4988]: I1123 07:08:07.248308 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:08:11 crc kubenswrapper[4988]: I1123 07:08:11.313348 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9z7ns" 
event={"ID":"8539c348-f366-4d11-862b-a645eaaf4a40","Type":"ContainerStarted","Data":"006fffb78e949582925d310e274095954e034e6bdeb41702e176c21e72464298"} Nov 23 07:08:11 crc kubenswrapper[4988]: I1123 07:08:11.316050 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerStarted","Data":"a7101423e064ff71a75751892cf30cd72743db81751fe2d2f9bcf6b138b834d2"} Nov 23 07:08:11 crc kubenswrapper[4988]: I1123 07:08:11.316776 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:08:11 crc kubenswrapper[4988]: I1123 07:08:11.343250 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9z7ns" podStartSLOduration=2.453762822 podStartE2EDuration="10.34322336s" podCreationTimestamp="2025-11-23 07:08:01 +0000 UTC" firstStartedPulling="2025-11-23 07:08:02.211897842 +0000 UTC m=+1334.520410605" lastFinishedPulling="2025-11-23 07:08:10.10135838 +0000 UTC m=+1342.409871143" observedRunningTime="2025-11-23 07:08:11.340769062 +0000 UTC m=+1343.649281835" watchObservedRunningTime="2025-11-23 07:08:11.34322336 +0000 UTC m=+1343.651736143" Nov 23 07:08:11 crc kubenswrapper[4988]: I1123 07:08:11.369947 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.416597907 podStartE2EDuration="10.369926886s" podCreationTimestamp="2025-11-23 07:08:01 +0000 UTC" firstStartedPulling="2025-11-23 07:08:02.143316379 +0000 UTC m=+1334.451829142" lastFinishedPulling="2025-11-23 07:08:10.096645368 +0000 UTC m=+1342.405158121" observedRunningTime="2025-11-23 07:08:11.362902129 +0000 UTC m=+1343.671414892" watchObservedRunningTime="2025-11-23 07:08:11.369926886 +0000 UTC m=+1343.678439649" Nov 23 07:08:21 crc kubenswrapper[4988]: I1123 07:08:21.673447 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:08:21 crc kubenswrapper[4988]: I1123 07:08:21.674000 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:08:21 crc kubenswrapper[4988]: I1123 07:08:21.674051 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:08:21 crc kubenswrapper[4988]: I1123 07:08:21.674789 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51af3eda6050dc8d062c5878f4b042de917b8197626fc2ae6d794dfa7ecf4da9"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:08:21 crc kubenswrapper[4988]: I1123 07:08:21.674892 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" 
containerID="cri-o://51af3eda6050dc8d062c5878f4b042de917b8197626fc2ae6d794dfa7ecf4da9" gracePeriod=600 Nov 23 07:08:22 crc kubenswrapper[4988]: I1123 07:08:22.439254 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="51af3eda6050dc8d062c5878f4b042de917b8197626fc2ae6d794dfa7ecf4da9" exitCode=0 Nov 23 07:08:22 crc kubenswrapper[4988]: I1123 07:08:22.439581 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"51af3eda6050dc8d062c5878f4b042de917b8197626fc2ae6d794dfa7ecf4da9"} Nov 23 07:08:22 crc kubenswrapper[4988]: I1123 07:08:22.439867 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867"} Nov 23 07:08:22 crc kubenswrapper[4988]: I1123 07:08:22.439913 4988 scope.go:117] "RemoveContainer" containerID="c58a12ac5dbe2a21fab60b7320f73b8a7940a58137541c4513bbaa568ab1edb1" Nov 23 07:08:29 crc kubenswrapper[4988]: I1123 07:08:29.527542 4988 generic.go:334] "Generic (PLEG): container finished" podID="8539c348-f366-4d11-862b-a645eaaf4a40" containerID="006fffb78e949582925d310e274095954e034e6bdeb41702e176c21e72464298" exitCode=0 Nov 23 07:08:29 crc kubenswrapper[4988]: I1123 07:08:29.527664 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9z7ns" event={"ID":"8539c348-f366-4d11-862b-a645eaaf4a40","Type":"ContainerDied","Data":"006fffb78e949582925d310e274095954e034e6bdeb41702e176c21e72464298"} Nov 23 07:08:30 crc kubenswrapper[4988]: I1123 07:08:30.940628 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.033466 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-scripts\") pod \"8539c348-f366-4d11-862b-a645eaaf4a40\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.033905 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-combined-ca-bundle\") pod \"8539c348-f366-4d11-862b-a645eaaf4a40\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.033986 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-config-data\") pod \"8539c348-f366-4d11-862b-a645eaaf4a40\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.034032 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m547k\" (UniqueName: \"kubernetes.io/projected/8539c348-f366-4d11-862b-a645eaaf4a40-kube-api-access-m547k\") pod \"8539c348-f366-4d11-862b-a645eaaf4a40\" (UID: \"8539c348-f366-4d11-862b-a645eaaf4a40\") " Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.554279 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9z7ns" event={"ID":"8539c348-f366-4d11-862b-a645eaaf4a40","Type":"ContainerDied","Data":"d2b7122732d3023cd33a7101645e865e35d3e2bbfb25ca341a44fbff61a1c8fc"} Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.554327 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9z7ns" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.554338 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2b7122732d3023cd33a7101645e865e35d3e2bbfb25ca341a44fbff61a1c8fc" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.748438 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-scripts" (OuterVolumeSpecName: "scripts") pod "8539c348-f366-4d11-862b-a645eaaf4a40" (UID: "8539c348-f366-4d11-862b-a645eaaf4a40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.748488 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8539c348-f366-4d11-862b-a645eaaf4a40-kube-api-access-m547k" (OuterVolumeSpecName: "kube-api-access-m547k") pod "8539c348-f366-4d11-862b-a645eaaf4a40" (UID: "8539c348-f366-4d11-862b-a645eaaf4a40"). InnerVolumeSpecName "kube-api-access-m547k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.753561 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8539c348-f366-4d11-862b-a645eaaf4a40" (UID: "8539c348-f366-4d11-862b-a645eaaf4a40"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.755089 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.757723 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-config-data" (OuterVolumeSpecName: "config-data") pod "8539c348-f366-4d11-862b-a645eaaf4a40" (UID: "8539c348-f366-4d11-862b-a645eaaf4a40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.849669 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.849707 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.849722 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8539c348-f366-4d11-862b-a645eaaf4a40-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.849736 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m547k\" (UniqueName: \"kubernetes.io/projected/8539c348-f366-4d11-862b-a645eaaf4a40-kube-api-access-m547k\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.856778 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:08:31 crc kubenswrapper[4988]: E1123 07:08:31.857208 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8539c348-f366-4d11-862b-a645eaaf4a40" containerName="nova-cell0-conductor-db-sync" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.857222 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8539c348-f366-4d11-862b-a645eaaf4a40" containerName="nova-cell0-conductor-db-sync" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.857425 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8539c348-f366-4d11-862b-a645eaaf4a40" containerName="nova-cell0-conductor-db-sync" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.858006 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.861357 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-544pw" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.861815 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 07:08:31 crc kubenswrapper[4988]: I1123 07:08:31.867102 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:08:32 crc kubenswrapper[4988]: E1123 07:08:32.023306 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8539c348_f366_4d11_862b_a645eaaf4a40.slice\": RecentStats: unable to find data in memory cache]" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.055469 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.056158 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.056217 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c54xl\" (UniqueName: \"kubernetes.io/projected/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-kube-api-access-c54xl\") pod \"nova-cell0-conductor-0\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.158955 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.159065 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.159154 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c54xl\" (UniqueName: \"kubernetes.io/projected/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-kube-api-access-c54xl\") pod \"nova-cell0-conductor-0\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.163145 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: 
\"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.164848 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.177512 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c54xl\" (UniqueName: \"kubernetes.io/projected/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-kube-api-access-c54xl\") pod \"nova-cell0-conductor-0\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.223520 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:32 crc kubenswrapper[4988]: I1123 07:08:32.716257 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:08:33 crc kubenswrapper[4988]: I1123 07:08:33.585746 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72","Type":"ContainerStarted","Data":"7d023e152f127851ebcc0816fd9a68deedbc0ac220f32efd3e76af9c66c576c7"} Nov 23 07:08:33 crc kubenswrapper[4988]: I1123 07:08:33.586020 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72","Type":"ContainerStarted","Data":"b9068e4ca5525b90c63f7170c02e1c06cb197a62a113bf58483488a37d80734f"} Nov 23 07:08:33 crc kubenswrapper[4988]: I1123 07:08:33.586213 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 23 07:08:33 crc kubenswrapper[4988]: I1123 07:08:33.622151 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.62212975 podStartE2EDuration="2.62212975s" podCreationTimestamp="2025-11-23 07:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:08:33.607625972 +0000 UTC m=+1365.916138735" watchObservedRunningTime="2025-11-23 07:08:33.62212975 +0000 UTC m=+1365.930642513" Nov 23 07:08:35 crc kubenswrapper[4988]: I1123 07:08:35.350424 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:08:35 crc kubenswrapper[4988]: I1123 07:08:35.350939 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7e21c84c-8c43-417f-b4ec-90ce1f19594d" containerName="kube-state-metrics" containerID="cri-o://f30db7695834aef0fb068006041a6bec320ddcc9a460440affee5b182c0e4bba" gracePeriod=30 Nov 23 07:08:35 crc kubenswrapper[4988]: I1123 07:08:35.615231 4988 generic.go:334] "Generic (PLEG): container finished" podID="7e21c84c-8c43-417f-b4ec-90ce1f19594d" containerID="f30db7695834aef0fb068006041a6bec320ddcc9a460440affee5b182c0e4bba" exitCode=2 Nov 23 07:08:35 crc kubenswrapper[4988]: I1123 07:08:35.615281 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"7e21c84c-8c43-417f-b4ec-90ce1f19594d","Type":"ContainerDied","Data":"f30db7695834aef0fb068006041a6bec320ddcc9a460440affee5b182c0e4bba"} Nov 23 07:08:35 crc kubenswrapper[4988]: I1123 07:08:35.804567 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:08:35 crc kubenswrapper[4988]: I1123 07:08:35.933814 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s84vh\" (UniqueName: \"kubernetes.io/projected/7e21c84c-8c43-417f-b4ec-90ce1f19594d-kube-api-access-s84vh\") pod \"7e21c84c-8c43-417f-b4ec-90ce1f19594d\" (UID: \"7e21c84c-8c43-417f-b4ec-90ce1f19594d\") " Nov 23 07:08:35 crc kubenswrapper[4988]: I1123 07:08:35.942390 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e21c84c-8c43-417f-b4ec-90ce1f19594d-kube-api-access-s84vh" (OuterVolumeSpecName: "kube-api-access-s84vh") pod "7e21c84c-8c43-417f-b4ec-90ce1f19594d" (UID: "7e21c84c-8c43-417f-b4ec-90ce1f19594d"). InnerVolumeSpecName "kube-api-access-s84vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.036681 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s84vh\" (UniqueName: \"kubernetes.io/projected/7e21c84c-8c43-417f-b4ec-90ce1f19594d-kube-api-access-s84vh\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.626948 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7e21c84c-8c43-417f-b4ec-90ce1f19594d","Type":"ContainerDied","Data":"48058125d20fb72088aabfa9d5c68c0c6ae8c5112add6d668425c133a1c63db9"} Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.626999 4988 scope.go:117] "RemoveContainer" containerID="f30db7695834aef0fb068006041a6bec320ddcc9a460440affee5b182c0e4bba" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.627104 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.653641 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.662422 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.676772 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:08:36 crc kubenswrapper[4988]: E1123 07:08:36.677719 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e21c84c-8c43-417f-b4ec-90ce1f19594d" containerName="kube-state-metrics" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.677878 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e21c84c-8c43-417f-b4ec-90ce1f19594d" containerName="kube-state-metrics" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.678221 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e21c84c-8c43-417f-b4ec-90ce1f19594d" containerName="kube-state-metrics" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.678893 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.681984 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.683110 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.687785 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.853854 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.853938 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.854001 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.854122 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5xsp\" (UniqueName: \"kubernetes.io/projected/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-api-access-t5xsp\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.956433 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5xsp\" (UniqueName: \"kubernetes.io/projected/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-api-access-t5xsp\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.956576 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.956605 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.956638 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.961300 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.962078 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.964434 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:36 crc kubenswrapper[4988]: I1123 07:08:36.984628 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5xsp\" (UniqueName: \"kubernetes.io/projected/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-api-access-t5xsp\") pod \"kube-state-metrics-0\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.001812 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.201642 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.201900 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="ceilometer-central-agent" containerID="cri-o://5557a4c554318e5bbb9e2e377f18ba88f2bdee6bc7dfba3220348979f1738168" gracePeriod=30 Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.202128 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="sg-core" containerID="cri-o://728161b38b950684924d8142009fd5c3b72f7f24abdb1a3fa6e6595495d742e7" gracePeriod=30 Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.202291 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="proxy-httpd" containerID="cri-o://a7101423e064ff71a75751892cf30cd72743db81751fe2d2f9bcf6b138b834d2" gracePeriod=30 Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.202337 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="ceilometer-notification-agent" containerID="cri-o://a95b1a5aa614587332d01ca2e661f70ed909880355e1ba25b999c226dfadef34" gracePeriod=30 Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.451279 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:08:37 crc kubenswrapper[4988]: W1123 07:08:37.462690 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39c60d96_5836_4df5_8fa0_8e7ce2b6d1e3.slice/crio-71726a59e9e1609b07f6ee83488e520e1d596d9ed0079c42977f852e80b853e4 WatchSource:0}: Error finding container 71726a59e9e1609b07f6ee83488e520e1d596d9ed0079c42977f852e80b853e4: Status 404 returned error can't find the container with id 71726a59e9e1609b07f6ee83488e520e1d596d9ed0079c42977f852e80b853e4 Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.467085 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.637369 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3","Type":"ContainerStarted","Data":"71726a59e9e1609b07f6ee83488e520e1d596d9ed0079c42977f852e80b853e4"} Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.643125 4988 generic.go:334] "Generic (PLEG): container finished" podID="d2e61067-217d-4d40-8074-77774a895624" containerID="a7101423e064ff71a75751892cf30cd72743db81751fe2d2f9bcf6b138b834d2" exitCode=0 Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.643163 4988 generic.go:334] "Generic (PLEG): container finished" podID="d2e61067-217d-4d40-8074-77774a895624" containerID="728161b38b950684924d8142009fd5c3b72f7f24abdb1a3fa6e6595495d742e7" exitCode=2 Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.643174 4988 generic.go:334] "Generic (PLEG): container finished" podID="d2e61067-217d-4d40-8074-77774a895624" containerID="5557a4c554318e5bbb9e2e377f18ba88f2bdee6bc7dfba3220348979f1738168" exitCode=0 Nov 23 
07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.643216 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerDied","Data":"a7101423e064ff71a75751892cf30cd72743db81751fe2d2f9bcf6b138b834d2"} Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.643251 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerDied","Data":"728161b38b950684924d8142009fd5c3b72f7f24abdb1a3fa6e6595495d742e7"} Nov 23 07:08:37 crc kubenswrapper[4988]: I1123 07:08:37.643265 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerDied","Data":"5557a4c554318e5bbb9e2e377f18ba88f2bdee6bc7dfba3220348979f1738168"} Nov 23 07:08:38 crc kubenswrapper[4988]: I1123 07:08:38.511530 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e21c84c-8c43-417f-b4ec-90ce1f19594d" path="/var/lib/kubelet/pods/7e21c84c-8c43-417f-b4ec-90ce1f19594d/volumes" Nov 23 07:08:38 crc kubenswrapper[4988]: I1123 07:08:38.654760 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3","Type":"ContainerStarted","Data":"5ebe52496c238117a9598c27a6bbc10f3777cd5ad280a8dd4625534d34f3fa75"} Nov 23 07:08:38 crc kubenswrapper[4988]: I1123 07:08:38.654995 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 07:08:38 crc kubenswrapper[4988]: I1123 07:08:38.678818 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.333234289 podStartE2EDuration="2.678799897s" podCreationTimestamp="2025-11-23 07:08:36 +0000 UTC" firstStartedPulling="2025-11-23 07:08:37.46687119 +0000 UTC m=+1369.775383953" lastFinishedPulling="2025-11-23 07:08:37.812436798 +0000 UTC m=+1370.120949561" observedRunningTime="2025-11-23 07:08:38.67644476 +0000 UTC m=+1370.984957573" watchObservedRunningTime="2025-11-23 07:08:38.678799897 +0000 UTC m=+1370.987312660" Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.678763 4988 generic.go:334] "Generic (PLEG): container finished" podID="d2e61067-217d-4d40-8074-77774a895624" containerID="a95b1a5aa614587332d01ca2e661f70ed909880355e1ba25b999c226dfadef34" exitCode=0 Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.678874 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerDied","Data":"a95b1a5aa614587332d01ca2e661f70ed909880355e1ba25b999c226dfadef34"} Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.817849 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.930766 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-log-httpd\") pod \"d2e61067-217d-4d40-8074-77774a895624\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.930846 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-config-data\") pod \"d2e61067-217d-4d40-8074-77774a895624\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.931237 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d2e61067-217d-4d40-8074-77774a895624" (UID: "d2e61067-217d-4d40-8074-77774a895624"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.931637 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-scripts\") pod \"d2e61067-217d-4d40-8074-77774a895624\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.931679 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-combined-ca-bundle\") pod \"d2e61067-217d-4d40-8074-77774a895624\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.931704 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-run-httpd\") pod \"d2e61067-217d-4d40-8074-77774a895624\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.931725 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t78vg\" (UniqueName: \"kubernetes.io/projected/d2e61067-217d-4d40-8074-77774a895624-kube-api-access-t78vg\") pod \"d2e61067-217d-4d40-8074-77774a895624\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.931766 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-sg-core-conf-yaml\") pod \"d2e61067-217d-4d40-8074-77774a895624\" (UID: \"d2e61067-217d-4d40-8074-77774a895624\") " Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.932090 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d2e61067-217d-4d40-8074-77774a895624" (UID: "d2e61067-217d-4d40-8074-77774a895624"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.932102 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.939987 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-scripts" (OuterVolumeSpecName: "scripts") pod "d2e61067-217d-4d40-8074-77774a895624" (UID: "d2e61067-217d-4d40-8074-77774a895624"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.956582 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e61067-217d-4d40-8074-77774a895624-kube-api-access-t78vg" (OuterVolumeSpecName: "kube-api-access-t78vg") pod "d2e61067-217d-4d40-8074-77774a895624" (UID: "d2e61067-217d-4d40-8074-77774a895624"). InnerVolumeSpecName "kube-api-access-t78vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:40 crc kubenswrapper[4988]: I1123 07:08:40.967579 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d2e61067-217d-4d40-8074-77774a895624" (UID: "d2e61067-217d-4d40-8074-77774a895624"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.010271 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2e61067-217d-4d40-8074-77774a895624" (UID: "d2e61067-217d-4d40-8074-77774a895624"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.027585 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-config-data" (OuterVolumeSpecName: "config-data") pod "d2e61067-217d-4d40-8074-77774a895624" (UID: "d2e61067-217d-4d40-8074-77774a895624"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.033831 4988 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.033872 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.033881 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.033891 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2e61067-217d-4d40-8074-77774a895624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.033903 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2e61067-217d-4d40-8074-77774a895624-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.033912 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t78vg\" (UniqueName: \"kubernetes.io/projected/d2e61067-217d-4d40-8074-77774a895624-kube-api-access-t78vg\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.699880 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2e61067-217d-4d40-8074-77774a895624","Type":"ContainerDied","Data":"a6180a124013c98220f7e79d0b800dcab2db095debb393510c5dba9120c79f75"} Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.699927 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.699943 4988 scope.go:117] "RemoveContainer" containerID="a7101423e064ff71a75751892cf30cd72743db81751fe2d2f9bcf6b138b834d2" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.728580 4988 scope.go:117] "RemoveContainer" containerID="728161b38b950684924d8142009fd5c3b72f7f24abdb1a3fa6e6595495d742e7" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.748461 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.756686 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.756819 4988 scope.go:117] "RemoveContainer" containerID="a95b1a5aa614587332d01ca2e661f70ed909880355e1ba25b999c226dfadef34" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.793712 4988 scope.go:117] "RemoveContainer" containerID="5557a4c554318e5bbb9e2e377f18ba88f2bdee6bc7dfba3220348979f1738168" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.809245 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:41 crc kubenswrapper[4988]: E1123 07:08:41.810004 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="ceilometer-notification-agent" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.810019 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="ceilometer-notification-agent" Nov 23 07:08:41 crc kubenswrapper[4988]: E1123 07:08:41.810038 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="ceilometer-central-agent" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.810044 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="ceilometer-central-agent" Nov 23 07:08:41 crc kubenswrapper[4988]: E1123 07:08:41.810056 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="proxy-httpd" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.810062 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="proxy-httpd" Nov 23 07:08:41 crc kubenswrapper[4988]: E1123 07:08:41.810106 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="sg-core" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.810112 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="sg-core" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.810421 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="sg-core" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.810438 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="proxy-httpd" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.810452 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="ceilometer-central-agent" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.810472 4988 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d2e61067-217d-4d40-8074-77774a895624" containerName="ceilometer-notification-agent" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.813356 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.813458 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.826129 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.826212 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.826254 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.949827 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.949945 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khfjm\" (UniqueName: \"kubernetes.io/projected/1268b979-7e19-49a3-a72b-7361a801fb98-kube-api-access-khfjm\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.949982 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.950015 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-log-httpd\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.950053 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-scripts\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.950074 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-run-httpd\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:41 crc kubenswrapper[4988]: I1123 07:08:41.950109 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:41 
crc kubenswrapper[4988]: I1123 07:08:41.950133 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-config-data\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.051297 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khfjm\" (UniqueName: \"kubernetes.io/projected/1268b979-7e19-49a3-a72b-7361a801fb98-kube-api-access-khfjm\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.051341 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.051365 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-log-httpd\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.051395 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-run-httpd\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.051411 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-scripts\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.051438 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.051463 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-config-data\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.051542 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.051980 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-log-httpd\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0" Nov 23 07:08:42 crc 
kubenswrapper[4988]: I1123 07:08:42.052050 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-run-httpd\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.056082 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.056296 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-config-data\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.056332 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-scripts\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.056569 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.069281 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.078809 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khfjm\" (UniqueName: \"kubernetes.io/projected/1268b979-7e19-49a3-a72b-7361a801fb98-kube-api-access-khfjm\") pod \"ceilometer-0\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " pod="openstack/ceilometer-0"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.148254 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
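The mount side mirrors the teardown: per volume, "operationExecutor.VerifyControllerAttachedVolume started", then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded", as in the run above for the replacement ceilometer-0 pod. A sketch along the same lines (again hypothetical tooling with journal text assumed on stdin; for brevity it keys on the volume name alone, whereas a real check would key on the pod UID embedded in the UniqueName, since names like config-data recur across pods):

// mount_trace.go - illustrative sketch only: report the furthest mount phase
// each volume reached, based on the three messages visible in this journal.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// phases in increasing order of progress; each pattern captures the volume name
var phases = []struct {
	name string
	re   *regexp.Regexp
}{
	{"attach-verified", regexp.MustCompile(`VerifyControllerAttachedVolume started for volume \\"([^"\\]+)\\"`)},
	{"mount-started", regexp.MustCompile(`operationExecutor\.MountVolume started for volume \\"([^"\\]+)\\"`)},
	{"setup-succeeded", regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\"`)},
}

func main() {
	state := map[string]string{} // volume name -> furthest phase seen
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, p := range phases {
			if m := p.re.FindStringSubmatch(sc.Text()); m != nil {
				state[m[1]] = p.name // later phases overwrite earlier ones
			}
		}
	}
	for vol, ph := range state {
		fmt.Printf("%-25s %s\n", vol, ph)
	}
}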
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.279322 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.505584 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e61067-217d-4d40-8074-77774a895624" path="/var/lib/kubelet/pods/d2e61067-217d-4d40-8074-77774a895624/volumes"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.624513 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.718895 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerStarted","Data":"7b6cee0185c0c6937f3acee3898c2d5792cd813a58a44e1fe3bfdec57f471c91"}
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.780104 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-f8vvb"]
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.781661 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f8vvb"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.785948 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.788021 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.794705 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f8vvb"]
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.962107 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.963492 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.967482 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.977511 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr7rg\" (UniqueName: \"kubernetes.io/projected/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-kube-api-access-sr7rg\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.977777 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-config-data\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.977837 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-scripts\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.977939 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.981340 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.982523 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.986350 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 07:08:42 crc kubenswrapper[4988]: I1123 07:08:42.993312 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.015814 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082280 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082341 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-config-data\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082362 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082379 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082411 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr7rg\" (UniqueName: \"kubernetes.io/projected/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-kube-api-access-sr7rg\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082448 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-config-data\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082482 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-scripts\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082506 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-config-data\") pod \"nova-scheduler-0\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " 
pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082533 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c4598db-ddad-468c-ab6d-adcb4552bc0d-logs\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082548 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zk5\" (UniqueName: \"kubernetes.io/projected/7cc76871-98cf-4a82-b4ea-868c11bf18ca-kube-api-access-h4zk5\") pod \"nova-scheduler-0\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.082572 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqsfb\" (UniqueName: \"kubernetes.io/projected/4c4598db-ddad-468c-ab6d-adcb4552bc0d-kube-api-access-lqsfb\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.089898 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.092671 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.097497 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.105517 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.105550 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-config-data\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.108642 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-scripts\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.109242 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr7rg\" (UniqueName: \"kubernetes.io/projected/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-kube-api-access-sr7rg\") pod \"nova-cell0-cell-mapping-f8vvb\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.111755 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.160653 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.164025 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.165488 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.167792 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183713 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrb22\" (UniqueName: \"kubernetes.io/projected/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-kube-api-access-nrb22\") pod \"nova-cell1-novncproxy-0\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183788 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183815 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-config-data\") pod \"nova-scheduler-0\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183831 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d82pp\" (UniqueName: \"kubernetes.io/projected/9bcc3822-e89b-4c04-8822-5a188dd6eabc-kube-api-access-d82pp\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183858 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c4598db-ddad-468c-ab6d-adcb4552bc0d-logs\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183873 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zk5\" (UniqueName: \"kubernetes.io/projected/7cc76871-98cf-4a82-b4ea-868c11bf18ca-kube-api-access-h4zk5\") pod \"nova-scheduler-0\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183887 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bcc3822-e89b-4c04-8822-5a188dd6eabc-logs\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183909 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqsfb\" (UniqueName: \"kubernetes.io/projected/4c4598db-ddad-468c-ab6d-adcb4552bc0d-kube-api-access-lqsfb\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " 
pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183946 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183972 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-config-data\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.183991 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.184006 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.184021 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.184039 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-config-data\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.187872 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c4598db-ddad-468c-ab6d-adcb4552bc0d-logs\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.202679 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.210730 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.211292 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.216946 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-config-data\") pod \"nova-scheduler-0\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.219440 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-config-data\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.232604 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zk5\" (UniqueName: \"kubernetes.io/projected/7cc76871-98cf-4a82-b4ea-868c11bf18ca-kube-api-access-h4zk5\") pod \"nova-scheduler-0\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.234530 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqsfb\" (UniqueName: \"kubernetes.io/projected/4c4598db-ddad-468c-ab6d-adcb4552bc0d-kube-api-access-lqsfb\") pod \"nova-api-0\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") " pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.290274 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.290384 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.290510 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.290534 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-config-data\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.290638 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrb22\" (UniqueName: \"kubernetes.io/projected/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-kube-api-access-nrb22\") pod \"nova-cell1-novncproxy-0\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.290770 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.290816 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d82pp\" (UniqueName: 
\"kubernetes.io/projected/9bcc3822-e89b-4c04-8822-5a188dd6eabc-kube-api-access-d82pp\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.290880 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bcc3822-e89b-4c04-8822-5a188dd6eabc-logs\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.291437 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bcc3822-e89b-4c04-8822-5a188dd6eabc-logs\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.291931 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-8c2sj"] Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.304759 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.306855 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.307287 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.309310 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-config-data\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.310785 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.316860 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.326957 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrb22\" (UniqueName: \"kubernetes.io/projected/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-kube-api-access-nrb22\") pod \"nova-cell1-novncproxy-0\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.329073 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-8c2sj"] Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.374610 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d82pp\" (UniqueName: \"kubernetes.io/projected/9bcc3822-e89b-4c04-8822-5a188dd6eabc-kube-api-access-d82pp\") pod \"nova-metadata-0\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") " pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.390719 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.408288 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.408396 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.408432 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-config\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.408476 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.408496 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-999wd\" (UniqueName: \"kubernetes.io/projected/f839d139-cc2b-46cb-b100-d48211ad463c-kube-api-access-999wd\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.408624 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") 
" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.446045 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.532480 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.532999 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-config\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.533059 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.533077 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999wd\" (UniqueName: \"kubernetes.io/projected/f839d139-cc2b-46cb-b100-d48211ad463c-kube-api-access-999wd\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.533172 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.533356 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.539951 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-config\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.541642 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.542155 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-svc\") pod 
\"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.543117 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.548087 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.564141 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-999wd\" (UniqueName: \"kubernetes.io/projected/f839d139-cc2b-46cb-b100-d48211ad463c-kube-api-access-999wd\") pod \"dnsmasq-dns-5dd7c4987f-8c2sj\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.859037 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:43 crc kubenswrapper[4988]: I1123 07:08:43.953515 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-f8vvb"] Nov 23 07:08:43 crc kubenswrapper[4988]: W1123 07:08:43.987062 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0965fd34_7f35_496d_82c2_ad7a4cfb0d63.slice/crio-a7d7cf543559b96b0d8c0348ba61b3d9db9467c7374f5c8810268748f0862759 WatchSource:0}: Error finding container a7d7cf543559b96b0d8c0348ba61b3d9db9467c7374f5c8810268748f0862759: Status 404 returned error can't find the container with id a7d7cf543559b96b0d8c0348ba61b3d9db9467c7374f5c8810268748f0862759 Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.197434 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.219579 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.346710 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zvkhn"] Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.348306 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.351307 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.353256 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.374627 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zvkhn"] Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.426669 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.448532 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.451586 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.451715 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-scripts\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.451749 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47gzh\" (UniqueName: \"kubernetes.io/projected/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-kube-api-access-47gzh\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.451835 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-config-data\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.554078 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-config-data\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.554170 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.554274 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-scripts\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.554297 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47gzh\" (UniqueName: \"kubernetes.io/projected/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-kube-api-access-47gzh\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.558609 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-config-data\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.562082 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.563949 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-scripts\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.576448 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47gzh\" (UniqueName: \"kubernetes.io/projected/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-kube-api-access-47gzh\") pod \"nova-cell1-conductor-db-sync-zvkhn\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.592531 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-8c2sj"] Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.686083 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zvkhn"
Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.777980 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"92cb41fd-bac8-4ff0-a05e-c2eed4a08830","Type":"ContainerStarted","Data":"c1acbc5819e8ef46a9f62485b0d91c6b5392038aa93ce4fd69b34d98f50d3cb7"}
Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.779087 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" event={"ID":"f839d139-cc2b-46cb-b100-d48211ad463c","Type":"ContainerStarted","Data":"793de2d3c48d4cc2c7ac2c4bcc3cb7dca4e85d25d700d5000313e827ea75eef5"}
Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.781845 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f8vvb" event={"ID":"0965fd34-7f35-496d-82c2-ad7a4cfb0d63","Type":"ContainerStarted","Data":"618d25672c5c0f17710274816f0723200e9f520a3065740bac8588506051f661"}
Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.781887 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f8vvb" event={"ID":"0965fd34-7f35-496d-82c2-ad7a4cfb0d63","Type":"ContainerStarted","Data":"a7d7cf543559b96b0d8c0348ba61b3d9db9467c7374f5c8810268748f0862759"}
Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.786804 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c4598db-ddad-468c-ab6d-adcb4552bc0d","Type":"ContainerStarted","Data":"bd252801de8f640604a6b316dea79e4daed9673582251b35e37975cd36ec0a5d"}
Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.795855 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7cc76871-98cf-4a82-b4ea-868c11bf18ca","Type":"ContainerStarted","Data":"e7460c407560fb2abe7dc4049ff2cbf252624d5399bf3d65c727b6d4dfaf5a73"}
Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.807609 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-f8vvb" podStartSLOduration=2.8075896670000002 podStartE2EDuration="2.807589667s" podCreationTimestamp="2025-11-23 07:08:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:08:44.799653706 +0000 UTC m=+1377.108166469" watchObservedRunningTime="2025-11-23 07:08:44.807589667 +0000 UTC m=+1377.116102430"
Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.809968 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bcc3822-e89b-4c04-8822-5a188dd6eabc","Type":"ContainerStarted","Data":"0f0acdc9be3408df562665e966969ead03650d38954a8226ee70be545f8794a7"}
Nov 23 07:08:44 crc kubenswrapper[4988]: I1123 07:08:44.812931 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerStarted","Data":"311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3"}
Nov 23 07:08:45 crc kubenswrapper[4988]: I1123 07:08:45.228466 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zvkhn"]
Nov 23 07:08:45 crc kubenswrapper[4988]: I1123 07:08:45.826258 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerStarted","Data":"379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b"}
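The event={...} payloads in these "SyncLoop (PLEG)" lines happen to print as valid JSON ({"ID":...,"Type":...,"Data":...}), so they can be pulled out of the journal text and decoded directly. An illustrative sketch (hypothetical tooling, journal text assumed on stdin, as before):

// pleg_events.go - illustrative sketch only: extract and decode the event=
// payloads from "SyncLoop (PLEG)" journal lines.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
)

type plegEvent struct {
	ID   string // pod UID
	Type string // e.g. ContainerStarted, ContainerDied
	Data string // container or sandbox ID
}

var evRe = regexp.MustCompile(`event=(\{[^}]*\})`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := evRe.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		var ev plegEvent
		if err := json.Unmarshal([]byte(m[1]), &ev); err != nil {
			continue // skip any event= payload that is not this JSON shape
		}
		fmt.Printf("%s %s %s\n", ev.Type, ev.ID, ev.Data)
	}
}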
event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerStarted","Data":"379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b"} Nov 23 07:08:45 crc kubenswrapper[4988]: I1123 07:08:45.826612 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerStarted","Data":"2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7"} Nov 23 07:08:45 crc kubenswrapper[4988]: I1123 07:08:45.828377 4988 generic.go:334] "Generic (PLEG): container finished" podID="f839d139-cc2b-46cb-b100-d48211ad463c" containerID="fc8efdc69fbb254927b268c08bfc58c1060cf3383c4b15843278cf21b966b810" exitCode=0 Nov 23 07:08:45 crc kubenswrapper[4988]: I1123 07:08:45.828466 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" event={"ID":"f839d139-cc2b-46cb-b100-d48211ad463c","Type":"ContainerDied","Data":"fc8efdc69fbb254927b268c08bfc58c1060cf3383c4b15843278cf21b966b810"} Nov 23 07:08:45 crc kubenswrapper[4988]: I1123 07:08:45.837635 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zvkhn" event={"ID":"a1ba06f4-14ba-421e-85ab-f9a593f7c60c","Type":"ContainerStarted","Data":"4e6fe2366cf0433936682d32ab254792a68eff37687d4d89181ec0d51fed8967"} Nov 23 07:08:45 crc kubenswrapper[4988]: I1123 07:08:45.837895 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zvkhn" event={"ID":"a1ba06f4-14ba-421e-85ab-f9a593f7c60c","Type":"ContainerStarted","Data":"9c8ff6d4edb70f9338857506d551379627b42a4a291e6426160b094e679a83e4"} Nov 23 07:08:45 crc kubenswrapper[4988]: I1123 07:08:45.876661 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zvkhn" podStartSLOduration=1.876639857 podStartE2EDuration="1.876639857s" podCreationTimestamp="2025-11-23 07:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:08:45.868727207 +0000 UTC m=+1378.177239980" watchObservedRunningTime="2025-11-23 07:08:45.876639857 +0000 UTC m=+1378.185152620" Nov 23 07:08:47 crc kubenswrapper[4988]: I1123 07:08:47.022605 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 07:08:47 crc kubenswrapper[4988]: I1123 07:08:47.352656 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:08:47 crc kubenswrapper[4988]: I1123 07:08:47.362701 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:08:47 crc kubenswrapper[4988]: I1123 07:08:47.864577 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" event={"ID":"f839d139-cc2b-46cb-b100-d48211ad463c","Type":"ContainerStarted","Data":"35fd0ed055ab1700c170fa7a1882e7bdaba65f13e3de0fe4b4b573c72249cd4d"} Nov 23 07:08:47 crc kubenswrapper[4988]: I1123 07:08:47.864736 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:08:47 crc kubenswrapper[4988]: I1123 07:08:47.885971 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" podStartSLOduration=4.8859526760000005 podStartE2EDuration="4.885952676s" podCreationTimestamp="2025-11-23 07:08:43 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:08:47.879477751 +0000 UTC m=+1380.187990514" watchObservedRunningTime="2025-11-23 07:08:47.885952676 +0000 UTC m=+1380.194465439" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.171539 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-brwzb"] Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.174804 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.198369 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brwzb"] Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.355893 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87cb\" (UniqueName: \"kubernetes.io/projected/e923f972-72cd-4605-a782-0f13ba67b9fb-kube-api-access-z87cb\") pod \"redhat-operators-brwzb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.355951 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-catalog-content\") pod \"redhat-operators-brwzb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.356025 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-utilities\") pod \"redhat-operators-brwzb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.457522 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87cb\" (UniqueName: \"kubernetes.io/projected/e923f972-72cd-4605-a782-0f13ba67b9fb-kube-api-access-z87cb\") pod \"redhat-operators-brwzb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.457844 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-catalog-content\") pod \"redhat-operators-brwzb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.458023 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-utilities\") pod \"redhat-operators-brwzb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.458386 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-utilities\") pod \"redhat-operators-brwzb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 
crc kubenswrapper[4988]: I1123 07:08:49.458418 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-catalog-content\") pod \"redhat-operators-brwzb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.478875 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87cb\" (UniqueName: \"kubernetes.io/projected/e923f972-72cd-4605-a782-0f13ba67b9fb-kube-api-access-z87cb\") pod \"redhat-operators-brwzb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:49 crc kubenswrapper[4988]: I1123 07:08:49.494674 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.354177 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brwzb"] Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.964812 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7cc76871-98cf-4a82-b4ea-868c11bf18ca","Type":"ContainerStarted","Data":"139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c"} Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.966530 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bcc3822-e89b-4c04-8822-5a188dd6eabc","Type":"ContainerStarted","Data":"8e3c9d2fec29812fc7ace8e08e11d94ae904a84c512878a8526091d5bb35f819"} Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.966654 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bcc3822-e89b-4c04-8822-5a188dd6eabc","Type":"ContainerStarted","Data":"63f613a721d537ea4a6ffd787b180d4d5ff30b9232cb7ac04da6abb7c20801b3"} Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.966634 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerName="nova-metadata-log" containerID="cri-o://63f613a721d537ea4a6ffd787b180d4d5ff30b9232cb7ac04da6abb7c20801b3" gracePeriod=30 Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.966686 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerName="nova-metadata-metadata" containerID="cri-o://8e3c9d2fec29812fc7ace8e08e11d94ae904a84c512878a8526091d5bb35f819" gracePeriod=30 Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.968690 4988 generic.go:334] "Generic (PLEG): container finished" podID="0965fd34-7f35-496d-82c2-ad7a4cfb0d63" containerID="618d25672c5c0f17710274816f0723200e9f520a3065740bac8588506051f661" exitCode=0 Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.968751 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f8vvb" event={"ID":"0965fd34-7f35-496d-82c2-ad7a4cfb0d63","Type":"ContainerDied","Data":"618d25672c5c0f17710274816f0723200e9f520a3065740bac8588506051f661"} Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.972642 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4c4598db-ddad-468c-ab6d-adcb4552bc0d","Type":"ContainerStarted","Data":"5379910b48fc1ebdae2f3fa9f537dcf35f365339c391bfe25023c2fec1e40b0c"} Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.972679 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c4598db-ddad-468c-ab6d-adcb4552bc0d","Type":"ContainerStarted","Data":"b50b4687de8a67b2b06a7b57e4fc5944ae67fded9f5bd1f3da40846a23172fff"} Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.979298 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerStarted","Data":"5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6"} Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.979416 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.980773 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"92cb41fd-bac8-4ff0-a05e-c2eed4a08830","Type":"ContainerStarted","Data":"5a4cefe36ba5167adbe690485fbdf4eaf246ac99bc536213997c576d5e2fb37d"} Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.980899 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="92cb41fd-bac8-4ff0-a05e-c2eed4a08830" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5a4cefe36ba5167adbe690485fbdf4eaf246ac99bc536213997c576d5e2fb37d" gracePeriod=30 Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.985566 4988 generic.go:334] "Generic (PLEG): container finished" podID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerID="175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad" exitCode=0 Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.985626 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brwzb" event={"ID":"e923f972-72cd-4605-a782-0f13ba67b9fb","Type":"ContainerDied","Data":"175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad"} Nov 23 07:08:52 crc kubenswrapper[4988]: I1123 07:08:52.985657 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brwzb" event={"ID":"e923f972-72cd-4605-a782-0f13ba67b9fb","Type":"ContainerStarted","Data":"fdfcc16439795a14e64e1805c8a3cf0b3515981547b581f838f43917a7d10e86"} Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.010118 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.459464641 podStartE2EDuration="10.010102612s" podCreationTimestamp="2025-11-23 07:08:43 +0000 UTC" firstStartedPulling="2025-11-23 07:08:44.214805029 +0000 UTC m=+1376.523317792" lastFinishedPulling="2025-11-23 07:08:51.765443 +0000 UTC m=+1384.073955763" observedRunningTime="2025-11-23 07:08:53.007060889 +0000 UTC m=+1385.315573672" watchObservedRunningTime="2025-11-23 07:08:53.010102612 +0000 UTC m=+1385.318615385" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.011717 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.6796124519999998 podStartE2EDuration="11.011710321s" podCreationTimestamp="2025-11-23 07:08:42 +0000 UTC" firstStartedPulling="2025-11-23 07:08:44.433585787 +0000 UTC m=+1376.742098550" lastFinishedPulling="2025-11-23 07:08:51.765683656 +0000 UTC 
m=+1384.074196419" observedRunningTime="2025-11-23 07:08:52.988519654 +0000 UTC m=+1385.297032427" watchObservedRunningTime="2025-11-23 07:08:53.011710321 +0000 UTC m=+1385.320223094" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.078179 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.524101821 podStartE2EDuration="11.078157934s" podCreationTimestamp="2025-11-23 07:08:42 +0000 UTC" firstStartedPulling="2025-11-23 07:08:44.234372469 +0000 UTC m=+1376.542885232" lastFinishedPulling="2025-11-23 07:08:51.788428572 +0000 UTC m=+1384.096941345" observedRunningTime="2025-11-23 07:08:53.069075436 +0000 UTC m=+1385.377588229" watchObservedRunningTime="2025-11-23 07:08:53.078157934 +0000 UTC m=+1385.386670707" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.134464 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.112073806 podStartE2EDuration="12.134434054s" podCreationTimestamp="2025-11-23 07:08:41 +0000 UTC" firstStartedPulling="2025-11-23 07:08:42.628601877 +0000 UTC m=+1374.937114650" lastFinishedPulling="2025-11-23 07:08:51.650962125 +0000 UTC m=+1383.959474898" observedRunningTime="2025-11-23 07:08:53.128881921 +0000 UTC m=+1385.437394684" watchObservedRunningTime="2025-11-23 07:08:53.134434054 +0000 UTC m=+1385.442946827" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.162310 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.802763425 podStartE2EDuration="10.162283322s" podCreationTimestamp="2025-11-23 07:08:43 +0000 UTC" firstStartedPulling="2025-11-23 07:08:44.434061438 +0000 UTC m=+1376.742574201" lastFinishedPulling="2025-11-23 07:08:51.793581315 +0000 UTC m=+1384.102094098" observedRunningTime="2025-11-23 07:08:53.151464042 +0000 UTC m=+1385.459976805" watchObservedRunningTime="2025-11-23 07:08:53.162283322 +0000 UTC m=+1385.470796085" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.292691 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.292749 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.307475 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.307519 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.338673 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.393722 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.394157 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.446645 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.860454 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 
07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.919698 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-gtsjs"] Nov 23 07:08:53 crc kubenswrapper[4988]: I1123 07:08:53.919979 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" podUID="ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" containerName="dnsmasq-dns" containerID="cri-o://7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa" gracePeriod=10 Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.000895 4988 generic.go:334] "Generic (PLEG): container finished" podID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerID="63f613a721d537ea4a6ffd787b180d4d5ff30b9232cb7ac04da6abb7c20801b3" exitCode=143 Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.001125 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bcc3822-e89b-4c04-8822-5a188dd6eabc","Type":"ContainerDied","Data":"63f613a721d537ea4a6ffd787b180d4d5ff30b9232cb7ac04da6abb7c20801b3"} Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.088804 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.337342 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.378478 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.697375 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.800516 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-scripts\") pod \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.800634 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr7rg\" (UniqueName: \"kubernetes.io/projected/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-kube-api-access-sr7rg\") pod \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.800683 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-config-data\") pod \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.800735 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-combined-ca-bundle\") pod \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\" (UID: \"0965fd34-7f35-496d-82c2-ad7a4cfb0d63\") " Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.808081 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-kube-api-access-sr7rg" (OuterVolumeSpecName: "kube-api-access-sr7rg") pod "0965fd34-7f35-496d-82c2-ad7a4cfb0d63" (UID: "0965fd34-7f35-496d-82c2-ad7a4cfb0d63"). InnerVolumeSpecName "kube-api-access-sr7rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.825380 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-scripts" (OuterVolumeSpecName: "scripts") pod "0965fd34-7f35-496d-82c2-ad7a4cfb0d63" (UID: "0965fd34-7f35-496d-82c2-ad7a4cfb0d63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.843351 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-config-data" (OuterVolumeSpecName: "config-data") pod "0965fd34-7f35-496d-82c2-ad7a4cfb0d63" (UID: "0965fd34-7f35-496d-82c2-ad7a4cfb0d63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.846405 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0965fd34-7f35-496d-82c2-ad7a4cfb0d63" (UID: "0965fd34-7f35-496d-82c2-ad7a4cfb0d63"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.902789 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.902816 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr7rg\" (UniqueName: \"kubernetes.io/projected/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-kube-api-access-sr7rg\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.902828 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.902837 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965fd34-7f35-496d-82c2-ad7a4cfb0d63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:54 crc kubenswrapper[4988]: I1123 07:08:54.965573 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.013002 4988 generic.go:334] "Generic (PLEG): container finished" podID="ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" containerID="7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa" exitCode=0 Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.013143 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.014113 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" event={"ID":"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be","Type":"ContainerDied","Data":"7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa"} Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.014164 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-gtsjs" event={"ID":"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be","Type":"ContainerDied","Data":"5482d01fd0ee1e1b934f2c596dca9f2d79b97dfaff5e254466b76ce23860c8d5"} Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.014185 4988 scope.go:117] "RemoveContainer" containerID="7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.020328 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-f8vvb" event={"ID":"0965fd34-7f35-496d-82c2-ad7a4cfb0d63","Type":"ContainerDied","Data":"a7d7cf543559b96b0d8c0348ba61b3d9db9467c7374f5c8810268748f0862759"} Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.020376 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7d7cf543559b96b0d8c0348ba61b3d9db9467c7374f5c8810268748f0862759" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.021177 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-f8vvb" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.026708 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brwzb" event={"ID":"e923f972-72cd-4605-a782-0f13ba67b9fb","Type":"ContainerStarted","Data":"3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278"} Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.040762 4988 scope.go:117] "RemoveContainer" containerID="24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.105924 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-nb\") pod \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.106003 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-svc\") pod \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.106096 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5wvc\" (UniqueName: \"kubernetes.io/projected/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-kube-api-access-s5wvc\") pod \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.106166 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-config\") pod \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.106548 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-sb\") pod \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.106598 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-swift-storage-0\") pod \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\" (UID: \"ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be\") " Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.117595 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-kube-api-access-s5wvc" (OuterVolumeSpecName: "kube-api-access-s5wvc") pod "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" (UID: "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be"). InnerVolumeSpecName "kube-api-access-s5wvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.210637 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5wvc\" (UniqueName: \"kubernetes.io/projected/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-kube-api-access-s5wvc\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.233912 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" (UID: "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.234135 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" (UID: "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.254811 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.269883 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.287475 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" (UID: "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.296672 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-config" (OuterVolumeSpecName: "config") pod "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" (UID: "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.296725 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" (UID: "ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.314646 4988 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.314683 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.314693 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.314702 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.314710 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.320145 4988 scope.go:117] "RemoveContainer" containerID="7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa" Nov 23 07:08:55 crc kubenswrapper[4988]: E1123 07:08:55.323258 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa\": container with ID starting with 7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa not found: ID does not exist" containerID="7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.323301 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa"} err="failed to get container status \"7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa\": rpc error: code = NotFound desc = could not find container \"7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa\": container with ID starting with 7caabec297a84b1fb7791fc89df8f4ebb4438255969fd039fb3eec164bc6fafa not found: ID does not exist" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.323332 4988 scope.go:117] "RemoveContainer" containerID="24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7" Nov 23 07:08:55 crc kubenswrapper[4988]: E1123 07:08:55.324431 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7\": container with ID starting with 24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7 not found: ID does not exist" containerID="24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.324454 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7"} err="failed to get container status 
\"24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7\": rpc error: code = NotFound desc = could not find container \"24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7\": container with ID starting with 24dd726063995a13f01f3c83ea967a3717182647ad601c99179704300709e1f7 not found: ID does not exist" Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.368534 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-gtsjs"] Nov 23 07:08:55 crc kubenswrapper[4988]: I1123 07:08:55.377502 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-gtsjs"] Nov 23 07:08:56 crc kubenswrapper[4988]: I1123 07:08:56.040850 4988 generic.go:334] "Generic (PLEG): container finished" podID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerID="3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278" exitCode=0 Nov 23 07:08:56 crc kubenswrapper[4988]: I1123 07:08:56.040929 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brwzb" event={"ID":"e923f972-72cd-4605-a782-0f13ba67b9fb","Type":"ContainerDied","Data":"3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278"} Nov 23 07:08:56 crc kubenswrapper[4988]: I1123 07:08:56.043079 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-log" containerID="cri-o://b50b4687de8a67b2b06a7b57e4fc5944ae67fded9f5bd1f3da40846a23172fff" gracePeriod=30 Nov 23 07:08:56 crc kubenswrapper[4988]: I1123 07:08:56.043119 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-api" containerID="cri-o://5379910b48fc1ebdae2f3fa9f537dcf35f365339c391bfe25023c2fec1e40b0c" gracePeriod=30 Nov 23 07:08:56 crc kubenswrapper[4988]: I1123 07:08:56.507720 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" path="/var/lib/kubelet/pods/ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be/volumes" Nov 23 07:08:57 crc kubenswrapper[4988]: I1123 07:08:57.055819 4988 generic.go:334] "Generic (PLEG): container finished" podID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerID="b50b4687de8a67b2b06a7b57e4fc5944ae67fded9f5bd1f3da40846a23172fff" exitCode=143 Nov 23 07:08:57 crc kubenswrapper[4988]: I1123 07:08:57.055918 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c4598db-ddad-468c-ab6d-adcb4552bc0d","Type":"ContainerDied","Data":"b50b4687de8a67b2b06a7b57e4fc5944ae67fded9f5bd1f3da40846a23172fff"} Nov 23 07:08:57 crc kubenswrapper[4988]: I1123 07:08:57.056298 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7cc76871-98cf-4a82-b4ea-868c11bf18ca" containerName="nova-scheduler-scheduler" containerID="cri-o://139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c" gracePeriod=30 Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.065661 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brwzb" event={"ID":"e923f972-72cd-4605-a782-0f13ba67b9fb","Type":"ContainerStarted","Data":"bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94"} Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.092655 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-brwzb" podStartSLOduration=4.317538746 podStartE2EDuration="9.092635229s" podCreationTimestamp="2025-11-23 07:08:49 +0000 UTC" firstStartedPulling="2025-11-23 07:08:52.989487538 +0000 UTC m=+1385.298000341" lastFinishedPulling="2025-11-23 07:08:57.764584061 +0000 UTC m=+1390.073096824" observedRunningTime="2025-11-23 07:08:58.087088426 +0000 UTC m=+1390.395601199" watchObservedRunningTime="2025-11-23 07:08:58.092635229 +0000 UTC m=+1390.401147992" Nov 23 07:08:58 crc kubenswrapper[4988]: E1123 07:08:58.309049 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c is running failed: container process not found" containerID="139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:08:58 crc kubenswrapper[4988]: E1123 07:08:58.309582 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c is running failed: container process not found" containerID="139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:08:58 crc kubenswrapper[4988]: E1123 07:08:58.309986 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c is running failed: container process not found" containerID="139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:08:58 crc kubenswrapper[4988]: E1123 07:08:58.310035 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7cc76871-98cf-4a82-b4ea-868c11bf18ca" containerName="nova-scheduler-scheduler" Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.541809 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.681474 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4zk5\" (UniqueName: \"kubernetes.io/projected/7cc76871-98cf-4a82-b4ea-868c11bf18ca-kube-api-access-h4zk5\") pod \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.681541 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-config-data\") pod \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.681582 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-combined-ca-bundle\") pod \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\" (UID: \"7cc76871-98cf-4a82-b4ea-868c11bf18ca\") " Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.689793 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc76871-98cf-4a82-b4ea-868c11bf18ca-kube-api-access-h4zk5" (OuterVolumeSpecName: "kube-api-access-h4zk5") pod "7cc76871-98cf-4a82-b4ea-868c11bf18ca" (UID: "7cc76871-98cf-4a82-b4ea-868c11bf18ca"). InnerVolumeSpecName "kube-api-access-h4zk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.710185 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cc76871-98cf-4a82-b4ea-868c11bf18ca" (UID: "7cc76871-98cf-4a82-b4ea-868c11bf18ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.735965 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-config-data" (OuterVolumeSpecName: "config-data") pod "7cc76871-98cf-4a82-b4ea-868c11bf18ca" (UID: "7cc76871-98cf-4a82-b4ea-868c11bf18ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.783523 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4zk5\" (UniqueName: \"kubernetes.io/projected/7cc76871-98cf-4a82-b4ea-868c11bf18ca-kube-api-access-h4zk5\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.783756 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:58 crc kubenswrapper[4988]: I1123 07:08:58.783846 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc76871-98cf-4a82-b4ea-868c11bf18ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.081016 4988 generic.go:334] "Generic (PLEG): container finished" podID="7cc76871-98cf-4a82-b4ea-868c11bf18ca" containerID="139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c" exitCode=0 Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.081108 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.081122 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7cc76871-98cf-4a82-b4ea-868c11bf18ca","Type":"ContainerDied","Data":"139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c"} Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.081238 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7cc76871-98cf-4a82-b4ea-868c11bf18ca","Type":"ContainerDied","Data":"e7460c407560fb2abe7dc4049ff2cbf252624d5399bf3d65c727b6d4dfaf5a73"} Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.081259 4988 scope.go:117] "RemoveContainer" containerID="139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.106433 4988 scope.go:117] "RemoveContainer" containerID="139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c" Nov 23 07:08:59 crc kubenswrapper[4988]: E1123 07:08:59.106852 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c\": container with ID starting with 139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c not found: ID does not exist" containerID="139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.106881 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c"} err="failed to get container status \"139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c\": rpc error: code = NotFound desc = could not find container \"139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c\": container with ID starting with 139451b9c984abe18d6117a119cc172d7371abfdaa932f5d154520a799d0737c not found: ID does not exist" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.122615 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.137768 4988 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.147513 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:08:59 crc kubenswrapper[4988]: E1123 07:08:59.148037 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc76871-98cf-4a82-b4ea-868c11bf18ca" containerName="nova-scheduler-scheduler" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.148059 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc76871-98cf-4a82-b4ea-868c11bf18ca" containerName="nova-scheduler-scheduler" Nov 23 07:08:59 crc kubenswrapper[4988]: E1123 07:08:59.148082 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" containerName="dnsmasq-dns" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.148090 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" containerName="dnsmasq-dns" Nov 23 07:08:59 crc kubenswrapper[4988]: E1123 07:08:59.148112 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" containerName="init" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.148122 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" containerName="init" Nov 23 07:08:59 crc kubenswrapper[4988]: E1123 07:08:59.148141 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0965fd34-7f35-496d-82c2-ad7a4cfb0d63" containerName="nova-manage" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.148148 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0965fd34-7f35-496d-82c2-ad7a4cfb0d63" containerName="nova-manage" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.148905 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1b54b3-f8a6-4e8e-aadb-4080cb1ca1be" containerName="dnsmasq-dns" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.149012 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc76871-98cf-4a82-b4ea-868c11bf18ca" containerName="nova-scheduler-scheduler" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.149138 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0965fd34-7f35-496d-82c2-ad7a4cfb0d63" containerName="nova-manage" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.150049 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.155990 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.157018 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.291685 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wncr\" (UniqueName: \"kubernetes.io/projected/330518a2-2989-41d3-9a9f-620205d70a7a-kube-api-access-7wncr\") pod \"nova-scheduler-0\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.291836 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-config-data\") pod \"nova-scheduler-0\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.291881 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.393355 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-config-data\") pod \"nova-scheduler-0\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.393436 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.393506 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wncr\" (UniqueName: \"kubernetes.io/projected/330518a2-2989-41d3-9a9f-620205d70a7a-kube-api-access-7wncr\") pod \"nova-scheduler-0\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.398940 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.409029 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-config-data\") pod \"nova-scheduler-0\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.412649 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wncr\" (UniqueName: 
\"kubernetes.io/projected/330518a2-2989-41d3-9a9f-620205d70a7a-kube-api-access-7wncr\") pod \"nova-scheduler-0\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") " pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.469970 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.495774 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.496069 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:08:59 crc kubenswrapper[4988]: W1123 07:08:59.933084 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod330518a2_2989_41d3_9a9f_620205d70a7a.slice/crio-af0cfd2c56dbb6d7be720fffffa323cea5e6e93ac84be71b5292c1cfe6482e8c WatchSource:0}: Error finding container af0cfd2c56dbb6d7be720fffffa323cea5e6e93ac84be71b5292c1cfe6482e8c: Status 404 returned error can't find the container with id af0cfd2c56dbb6d7be720fffffa323cea5e6e93ac84be71b5292c1cfe6482e8c Nov 23 07:08:59 crc kubenswrapper[4988]: I1123 07:08:59.935052 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:09:00 crc kubenswrapper[4988]: I1123 07:09:00.091473 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"330518a2-2989-41d3-9a9f-620205d70a7a","Type":"ContainerStarted","Data":"af0cfd2c56dbb6d7be720fffffa323cea5e6e93ac84be71b5292c1cfe6482e8c"} Nov 23 07:09:00 crc kubenswrapper[4988]: I1123 07:09:00.512660 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc76871-98cf-4a82-b4ea-868c11bf18ca" path="/var/lib/kubelet/pods/7cc76871-98cf-4a82-b4ea-868c11bf18ca/volumes" Nov 23 07:09:00 crc kubenswrapper[4988]: I1123 07:09:00.543110 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-brwzb" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerName="registry-server" probeResult="failure" output=< Nov 23 07:09:00 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 07:09:00 crc kubenswrapper[4988]: > Nov 23 07:09:01 crc kubenswrapper[4988]: I1123 07:09:01.109126 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"330518a2-2989-41d3-9a9f-620205d70a7a","Type":"ContainerStarted","Data":"677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1"} Nov 23 07:09:01 crc kubenswrapper[4988]: I1123 07:09:01.133708 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.133680765 podStartE2EDuration="2.133680765s" podCreationTimestamp="2025-11-23 07:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:01.125637462 +0000 UTC m=+1393.434150245" watchObservedRunningTime="2025-11-23 07:09:01.133680765 +0000 UTC m=+1393.442193518" Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.118635 4988 generic.go:334] "Generic (PLEG): container finished" podID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerID="5379910b48fc1ebdae2f3fa9f537dcf35f365339c391bfe25023c2fec1e40b0c" exitCode=0 Nov 23 07:09:02 crc 
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.120066 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c4598db-ddad-468c-ab6d-adcb4552bc0d","Type":"ContainerDied","Data":"5379910b48fc1ebdae2f3fa9f537dcf35f365339c391bfe25023c2fec1e40b0c"}
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.120099 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c4598db-ddad-468c-ab6d-adcb4552bc0d","Type":"ContainerDied","Data":"bd252801de8f640604a6b316dea79e4daed9673582251b35e37975cd36ec0a5d"}
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.120110 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd252801de8f640604a6b316dea79e4daed9673582251b35e37975cd36ec0a5d"
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.130381 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.249757 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-config-data\") pod \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") "
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.250006 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqsfb\" (UniqueName: \"kubernetes.io/projected/4c4598db-ddad-468c-ab6d-adcb4552bc0d-kube-api-access-lqsfb\") pod \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") "
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.250156 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-combined-ca-bundle\") pod \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") "
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.250347 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c4598db-ddad-468c-ab6d-adcb4552bc0d-logs\") pod \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\" (UID: \"4c4598db-ddad-468c-ab6d-adcb4552bc0d\") "
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.252597 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c4598db-ddad-468c-ab6d-adcb4552bc0d-logs" (OuterVolumeSpecName: "logs") pod "4c4598db-ddad-468c-ab6d-adcb4552bc0d" (UID: "4c4598db-ddad-468c-ab6d-adcb4552bc0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.264068 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c4598db-ddad-468c-ab6d-adcb4552bc0d-kube-api-access-lqsfb" (OuterVolumeSpecName: "kube-api-access-lqsfb") pod "4c4598db-ddad-468c-ab6d-adcb4552bc0d" (UID: "4c4598db-ddad-468c-ab6d-adcb4552bc0d"). InnerVolumeSpecName "kube-api-access-lqsfb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.291680 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c4598db-ddad-468c-ab6d-adcb4552bc0d" (UID: "4c4598db-ddad-468c-ab6d-adcb4552bc0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.299085 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-config-data" (OuterVolumeSpecName: "config-data") pod "4c4598db-ddad-468c-ab6d-adcb4552bc0d" (UID: "4c4598db-ddad-468c-ab6d-adcb4552bc0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.353116 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c4598db-ddad-468c-ab6d-adcb4552bc0d-logs\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.353168 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.353210 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqsfb\" (UniqueName: \"kubernetes.io/projected/4c4598db-ddad-468c-ab6d-adcb4552bc0d-kube-api-access-lqsfb\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:02 crc kubenswrapper[4988]: I1123 07:09:02.353230 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c4598db-ddad-468c-ab6d-adcb4552bc0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.156693 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.165576 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.191622 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:03 crc kubenswrapper[4988]: E1123 07:09:03.192027 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-api" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.192052 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-api" Nov 23 07:09:03 crc kubenswrapper[4988]: E1123 07:09:03.192067 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-log" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.192073 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-log" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.192270 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-log" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.192297 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" containerName="nova-api-api" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.193255 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.205602 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.218239 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.274900 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfhfs\" (UniqueName: \"kubernetes.io/projected/2290f52c-4faa-402c-b432-e2b0626006d1-kube-api-access-tfhfs\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.275009 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-config-data\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.275062 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2290f52c-4faa-402c-b432-e2b0626006d1-logs\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.275146 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.377607 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfhfs\" (UniqueName: \"kubernetes.io/projected/2290f52c-4faa-402c-b432-e2b0626006d1-kube-api-access-tfhfs\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.377710 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-config-data\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.377916 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2290f52c-4faa-402c-b432-e2b0626006d1-logs\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.378033 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.379325 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2290f52c-4faa-402c-b432-e2b0626006d1-logs\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " 
pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.382579 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.386960 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-config-data\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.396039 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfhfs\" (UniqueName: \"kubernetes.io/projected/2290f52c-4faa-402c-b432-e2b0626006d1-kube-api-access-tfhfs\") pod \"nova-api-0\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.516133 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:03 crc kubenswrapper[4988]: I1123 07:09:03.993170 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:03 crc kubenswrapper[4988]: W1123 07:09:03.995841 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2290f52c_4faa_402c_b432_e2b0626006d1.slice/crio-2cb87e786be41527950d4666a38c8711873fdf39cc80f618c534f75c1a9699b8 WatchSource:0}: Error finding container 2cb87e786be41527950d4666a38c8711873fdf39cc80f618c534f75c1a9699b8: Status 404 returned error can't find the container with id 2cb87e786be41527950d4666a38c8711873fdf39cc80f618c534f75c1a9699b8 Nov 23 07:09:04 crc kubenswrapper[4988]: I1123 07:09:04.140511 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2290f52c-4faa-402c-b432-e2b0626006d1","Type":"ContainerStarted","Data":"2cb87e786be41527950d4666a38c8711873fdf39cc80f618c534f75c1a9699b8"} Nov 23 07:09:04 crc kubenswrapper[4988]: I1123 07:09:04.470583 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 07:09:04 crc kubenswrapper[4988]: I1123 07:09:04.506423 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c4598db-ddad-468c-ab6d-adcb4552bc0d" path="/var/lib/kubelet/pods/4c4598db-ddad-468c-ab6d-adcb4552bc0d/volumes" Nov 23 07:09:05 crc kubenswrapper[4988]: I1123 07:09:05.152253 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2290f52c-4faa-402c-b432-e2b0626006d1","Type":"ContainerStarted","Data":"4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026"} Nov 23 07:09:05 crc kubenswrapper[4988]: I1123 07:09:05.152321 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2290f52c-4faa-402c-b432-e2b0626006d1","Type":"ContainerStarted","Data":"20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd"} Nov 23 07:09:05 crc kubenswrapper[4988]: I1123 07:09:05.179267 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.179246132 podStartE2EDuration="2.179246132s" podCreationTimestamp="2025-11-23 07:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:05.177972272 +0000 UTC m=+1397.486485035" watchObservedRunningTime="2025-11-23 07:09:05.179246132 +0000 UTC m=+1397.487758905" Nov 23 07:09:08 crc kubenswrapper[4988]: I1123 07:09:08.185762 4988 generic.go:334] "Generic (PLEG): container finished" podID="a1ba06f4-14ba-421e-85ab-f9a593f7c60c" containerID="4e6fe2366cf0433936682d32ab254792a68eff37687d4d89181ec0d51fed8967" exitCode=0 Nov 23 07:09:08 crc kubenswrapper[4988]: I1123 07:09:08.185860 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zvkhn" event={"ID":"a1ba06f4-14ba-421e-85ab-f9a593f7c60c","Type":"ContainerDied","Data":"4e6fe2366cf0433936682d32ab254792a68eff37687d4d89181ec0d51fed8967"} Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.470772 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.514082 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.543575 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.552988 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.597154 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-combined-ca-bundle\") pod \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.597351 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-scripts\") pod \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.597549 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47gzh\" (UniqueName: \"kubernetes.io/projected/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-kube-api-access-47gzh\") pod \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.597686 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-config-data\") pod \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\" (UID: \"a1ba06f4-14ba-421e-85ab-f9a593f7c60c\") " Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.600332 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.608522 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-scripts" (OuterVolumeSpecName: "scripts") pod "a1ba06f4-14ba-421e-85ab-f9a593f7c60c" (UID: "a1ba06f4-14ba-421e-85ab-f9a593f7c60c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.610483 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-kube-api-access-47gzh" (OuterVolumeSpecName: "kube-api-access-47gzh") pod "a1ba06f4-14ba-421e-85ab-f9a593f7c60c" (UID: "a1ba06f4-14ba-421e-85ab-f9a593f7c60c"). InnerVolumeSpecName "kube-api-access-47gzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.632262 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-config-data" (OuterVolumeSpecName: "config-data") pod "a1ba06f4-14ba-421e-85ab-f9a593f7c60c" (UID: "a1ba06f4-14ba-421e-85ab-f9a593f7c60c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.638333 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1ba06f4-14ba-421e-85ab-f9a593f7c60c" (UID: "a1ba06f4-14ba-421e-85ab-f9a593f7c60c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.699417 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.699452 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.699464 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.699472 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47gzh\" (UniqueName: \"kubernetes.io/projected/a1ba06f4-14ba-421e-85ab-f9a593f7c60c-kube-api-access-47gzh\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:09 crc kubenswrapper[4988]: I1123 07:09:09.794301 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brwzb"] Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.208979 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zvkhn" event={"ID":"a1ba06f4-14ba-421e-85ab-f9a593f7c60c","Type":"ContainerDied","Data":"9c8ff6d4edb70f9338857506d551379627b42a4a291e6426160b094e679a83e4"} Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.209079 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zvkhn" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.209087 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c8ff6d4edb70f9338857506d551379627b42a4a291e6426160b094e679a83e4" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.244504 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.322073 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:09:10 crc kubenswrapper[4988]: E1123 07:09:10.322563 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ba06f4-14ba-421e-85ab-f9a593f7c60c" containerName="nova-cell1-conductor-db-sync" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.322584 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ba06f4-14ba-421e-85ab-f9a593f7c60c" containerName="nova-cell1-conductor-db-sync" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.322777 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ba06f4-14ba-421e-85ab-f9a593f7c60c" containerName="nova-cell1-conductor-db-sync" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.323675 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.326059 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.338991 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.411386 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.411529 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5w2t\" (UniqueName: \"kubernetes.io/projected/43d09b31-ee49-498b-bbaf-368e53723f62-kube-api-access-l5w2t\") pod \"nova-cell1-conductor-0\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.411572 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.513789 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.513925 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.514008 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5w2t\" (UniqueName: \"kubernetes.io/projected/43d09b31-ee49-498b-bbaf-368e53723f62-kube-api-access-l5w2t\") pod \"nova-cell1-conductor-0\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.519484 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.526347 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.533948 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5w2t\" (UniqueName: \"kubernetes.io/projected/43d09b31-ee49-498b-bbaf-368e53723f62-kube-api-access-l5w2t\") pod \"nova-cell1-conductor-0\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:10 crc kubenswrapper[4988]: I1123 07:09:10.641604 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.167371 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.221849 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"43d09b31-ee49-498b-bbaf-368e53723f62","Type":"ContainerStarted","Data":"1c3cd1d6db4c354043571ed95c1110021a826f29ac6346e0bb4a917df6ef9cb5"} Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.222539 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-brwzb" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerName="registry-server" containerID="cri-o://bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94" gracePeriod=2 Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.735345 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.841106 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z87cb\" (UniqueName: \"kubernetes.io/projected/e923f972-72cd-4605-a782-0f13ba67b9fb-kube-api-access-z87cb\") pod \"e923f972-72cd-4605-a782-0f13ba67b9fb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.841320 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-utilities\") pod \"e923f972-72cd-4605-a782-0f13ba67b9fb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.841463 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-catalog-content\") pod \"e923f972-72cd-4605-a782-0f13ba67b9fb\" (UID: \"e923f972-72cd-4605-a782-0f13ba67b9fb\") " Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.842113 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-utilities" (OuterVolumeSpecName: "utilities") pod "e923f972-72cd-4605-a782-0f13ba67b9fb" (UID: "e923f972-72cd-4605-a782-0f13ba67b9fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.845642 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e923f972-72cd-4605-a782-0f13ba67b9fb-kube-api-access-z87cb" (OuterVolumeSpecName: "kube-api-access-z87cb") pod "e923f972-72cd-4605-a782-0f13ba67b9fb" (UID: "e923f972-72cd-4605-a782-0f13ba67b9fb"). InnerVolumeSpecName "kube-api-access-z87cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.945347 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z87cb\" (UniqueName: \"kubernetes.io/projected/e923f972-72cd-4605-a782-0f13ba67b9fb-kube-api-access-z87cb\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.945559 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:11 crc kubenswrapper[4988]: I1123 07:09:11.946574 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e923f972-72cd-4605-a782-0f13ba67b9fb" (UID: "e923f972-72cd-4605-a782-0f13ba67b9fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.047735 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e923f972-72cd-4605-a782-0f13ba67b9fb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.168964 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.230416 4988 generic.go:334] "Generic (PLEG): container finished" podID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerID="bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94" exitCode=0 Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.230469 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brwzb" event={"ID":"e923f972-72cd-4605-a782-0f13ba67b9fb","Type":"ContainerDied","Data":"bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94"} Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.230494 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brwzb" event={"ID":"e923f972-72cd-4605-a782-0f13ba67b9fb","Type":"ContainerDied","Data":"fdfcc16439795a14e64e1805c8a3cf0b3515981547b581f838f43917a7d10e86"} Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.230511 4988 scope.go:117] "RemoveContainer" containerID="bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.230627 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brwzb" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.259945 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"43d09b31-ee49-498b-bbaf-368e53723f62","Type":"ContainerStarted","Data":"b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75"} Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.260522 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.262374 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brwzb"] Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.269446 4988 scope.go:117] "RemoveContainer" containerID="3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.271737 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-brwzb"] Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.292394 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.29236887 podStartE2EDuration="2.29236887s" podCreationTimestamp="2025-11-23 07:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:12.281156681 +0000 UTC m=+1404.589669464" watchObservedRunningTime="2025-11-23 07:09:12.29236887 +0000 UTC m=+1404.600881653" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.310850 4988 scope.go:117] "RemoveContainer" containerID="175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.354391 4988 scope.go:117] "RemoveContainer" containerID="bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94"
Nov 23 07:09:12 crc kubenswrapper[4988]: E1123 07:09:12.358295 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94\": container with ID starting with bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94 not found: ID does not exist" containerID="bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.358338 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94"} err="failed to get container status \"bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94\": rpc error: code = NotFound desc = could not find container \"bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94\": container with ID starting with bad824b1ede50ec306d01252fa899a9caaab78a042d47bb11526326d04a70b94 not found: ID does not exist"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.358368 4988 scope.go:117] "RemoveContainer" containerID="3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278"
Nov 23 07:09:12 crc kubenswrapper[4988]: E1123 07:09:12.358779 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278\": container with ID starting with 3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278 not found: ID does not exist" containerID="3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.358847 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278"} err="failed to get container status \"3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278\": rpc error: code = NotFound desc = could not find container \"3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278\": container with ID starting with 3e3c48c7b4f8758157cd6e0149aed22f4568d804d358ec7857ccbf4754a85278 not found: ID does not exist"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.358889 4988 scope.go:117] "RemoveContainer" containerID="175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad"
Nov 23 07:09:12 crc kubenswrapper[4988]: E1123 07:09:12.359185 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad\": container with ID starting with 175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad not found: ID does not exist" containerID="175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.359325 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad"} err="failed to get container status \"175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad\": rpc error: code = NotFound desc = could not find container \"175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad\": container with ID starting with 175462be10af14ae31e16252bdfbd1fe02f026d97c882aca1a4b38d29c7961ad not found: ID does not exist"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.507944 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" path="/var/lib/kubelet/pods/e923f972-72cd-4605-a782-0f13ba67b9fb/volumes"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.605655 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v9shd"]
Nov 23 07:09:12 crc kubenswrapper[4988]: E1123 07:09:12.606178 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerName="extract-utilities"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.606213 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerName="extract-utilities"
Nov 23 07:09:12 crc kubenswrapper[4988]: E1123 07:09:12.606223 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerName="extract-content"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.606232 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerName="extract-content"
Nov 23 07:09:12 crc kubenswrapper[4988]: E1123 07:09:12.606255 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerName="registry-server"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.606263 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerName="registry-server"
Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.606523 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="e923f972-72cd-4605-a782-0f13ba67b9fb" containerName="registry-server"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.633414 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9shd"] Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.667753 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-catalog-content\") pod \"redhat-marketplace-v9shd\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.667878 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-utilities\") pod \"redhat-marketplace-v9shd\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.667934 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5bn\" (UniqueName: \"kubernetes.io/projected/c27d856c-9f29-4c96-a825-7d8c2d7f151a-kube-api-access-jl5bn\") pod \"redhat-marketplace-v9shd\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.770659 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-utilities\") pod \"redhat-marketplace-v9shd\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.770739 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl5bn\" (UniqueName: \"kubernetes.io/projected/c27d856c-9f29-4c96-a825-7d8c2d7f151a-kube-api-access-jl5bn\") pod \"redhat-marketplace-v9shd\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.770845 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-catalog-content\") pod \"redhat-marketplace-v9shd\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.771398 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-catalog-content\") pod \"redhat-marketplace-v9shd\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.771628 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-utilities\") pod \"redhat-marketplace-v9shd\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.789659 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jl5bn\" (UniqueName: \"kubernetes.io/projected/c27d856c-9f29-4c96-a825-7d8c2d7f151a-kube-api-access-jl5bn\") pod \"redhat-marketplace-v9shd\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:12 crc kubenswrapper[4988]: I1123 07:09:12.925039 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:13 crc kubenswrapper[4988]: I1123 07:09:13.411176 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9shd"] Nov 23 07:09:13 crc kubenswrapper[4988]: I1123 07:09:13.517417 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:09:13 crc kubenswrapper[4988]: I1123 07:09:13.517470 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:09:14 crc kubenswrapper[4988]: I1123 07:09:14.279760 4988 generic.go:334] "Generic (PLEG): container finished" podID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerID="62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984" exitCode=0 Nov 23 07:09:14 crc kubenswrapper[4988]: I1123 07:09:14.280125 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9shd" event={"ID":"c27d856c-9f29-4c96-a825-7d8c2d7f151a","Type":"ContainerDied","Data":"62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984"} Nov 23 07:09:14 crc kubenswrapper[4988]: I1123 07:09:14.280156 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9shd" event={"ID":"c27d856c-9f29-4c96-a825-7d8c2d7f151a","Type":"ContainerStarted","Data":"773c6b05ef1f0c89a7f59eec66a879fe18527daf1d00e4ccb14d0532b16080e9"} Nov 23 07:09:14 crc kubenswrapper[4988]: I1123 07:09:14.599400 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:09:14 crc kubenswrapper[4988]: I1123 07:09:14.599465 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:09:15 crc kubenswrapper[4988]: I1123 07:09:15.292973 4988 generic.go:334] "Generic (PLEG): container finished" podID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerID="96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7" exitCode=0 Nov 23 07:09:15 crc kubenswrapper[4988]: I1123 07:09:15.293107 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9shd" event={"ID":"c27d856c-9f29-4c96-a825-7d8c2d7f151a","Type":"ContainerDied","Data":"96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7"} Nov 23 07:09:16 crc kubenswrapper[4988]: I1123 07:09:16.307569 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9shd" event={"ID":"c27d856c-9f29-4c96-a825-7d8c2d7f151a","Type":"ContainerStarted","Data":"1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342"} Nov 23 07:09:16 crc kubenswrapper[4988]: I1123 
Nov 23 07:09:16 crc kubenswrapper[4988]: I1123 07:09:16.330690 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v9shd" podStartSLOduration=2.960904072 podStartE2EDuration="4.330667853s" podCreationTimestamp="2025-11-23 07:09:12 +0000 UTC" firstStartedPulling="2025-11-23 07:09:14.282299366 +0000 UTC m=+1406.590812139" lastFinishedPulling="2025-11-23 07:09:15.652063157 +0000 UTC m=+1407.960575920" observedRunningTime="2025-11-23 07:09:16.323599283 +0000 UTC m=+1408.632112056" watchObservedRunningTime="2025-11-23 07:09:16.330667853 +0000 UTC m=+1408.639180616"
Nov 23 07:09:20 crc kubenswrapper[4988]: I1123 07:09:20.683816 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 23 07:09:22 crc kubenswrapper[4988]: I1123 07:09:22.926879 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v9shd"
Nov 23 07:09:22 crc kubenswrapper[4988]: I1123 07:09:22.926936 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v9shd"
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.012583 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v9shd"
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.397512 4988 generic.go:334] "Generic (PLEG): container finished" podID="92cb41fd-bac8-4ff0-a05e-c2eed4a08830" containerID="5a4cefe36ba5167adbe690485fbdf4eaf246ac99bc536213997c576d5e2fb37d" exitCode=137
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.397623 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"92cb41fd-bac8-4ff0-a05e-c2eed4a08830","Type":"ContainerDied","Data":"5a4cefe36ba5167adbe690485fbdf4eaf246ac99bc536213997c576d5e2fb37d"}
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.398236 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"92cb41fd-bac8-4ff0-a05e-c2eed4a08830","Type":"ContainerDied","Data":"c1acbc5819e8ef46a9f62485b0d91c6b5392038aa93ce4fd69b34d98f50d3cb7"}
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.398328 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1acbc5819e8ef46a9f62485b0d91c6b5392038aa93ce4fd69b34d98f50d3cb7"
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.400408 4988 generic.go:334] "Generic (PLEG): container finished" podID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerID="8e3c9d2fec29812fc7ace8e08e11d94ae904a84c512878a8526091d5bb35f819" exitCode=137
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.400551 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bcc3822-e89b-4c04-8822-5a188dd6eabc","Type":"ContainerDied","Data":"8e3c9d2fec29812fc7ace8e08e11d94ae904a84c512878a8526091d5bb35f819"}
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.400634 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bcc3822-e89b-4c04-8822-5a188dd6eabc","Type":"ContainerDied","Data":"0f0acdc9be3408df562665e966969ead03650d38954a8226ee70be545f8794a7"}
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.400727 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f0acdc9be3408df562665e966969ead03650d38954a8226ee70be545f8794a7"
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.459650 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v9shd"
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.461998 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.465049 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.490545 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-combined-ca-bundle\") pod \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") "
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.490641 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-config-data\") pod \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") "
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.490710 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrb22\" (UniqueName: \"kubernetes.io/projected/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-kube-api-access-nrb22\") pod \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") "
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.490785 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-combined-ca-bundle\") pod \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") "
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.490818 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d82pp\" (UniqueName: \"kubernetes.io/projected/9bcc3822-e89b-4c04-8822-5a188dd6eabc-kube-api-access-d82pp\") pod \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") "
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.490898 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bcc3822-e89b-4c04-8822-5a188dd6eabc-logs\") pod \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\" (UID: \"9bcc3822-e89b-4c04-8822-5a188dd6eabc\") "
Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.490935 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-config-data\") pod \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\" (UID: \"92cb41fd-bac8-4ff0-a05e-c2eed4a08830\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.498393 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-kube-api-access-nrb22" (OuterVolumeSpecName: "kube-api-access-nrb22") pod "92cb41fd-bac8-4ff0-a05e-c2eed4a08830" (UID: "92cb41fd-bac8-4ff0-a05e-c2eed4a08830"). InnerVolumeSpecName "kube-api-access-nrb22". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.504506 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bcc3822-e89b-4c04-8822-5a188dd6eabc-kube-api-access-d82pp" (OuterVolumeSpecName: "kube-api-access-d82pp") pod "9bcc3822-e89b-4c04-8822-5a188dd6eabc" (UID: "9bcc3822-e89b-4c04-8822-5a188dd6eabc"). InnerVolumeSpecName "kube-api-access-d82pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.512202 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9shd"] Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.524018 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.525160 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.525952 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.528428 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.531754 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-config-data" (OuterVolumeSpecName: "config-data") pod "9bcc3822-e89b-4c04-8822-5a188dd6eabc" (UID: "9bcc3822-e89b-4c04-8822-5a188dd6eabc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.533349 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-config-data" (OuterVolumeSpecName: "config-data") pod "92cb41fd-bac8-4ff0-a05e-c2eed4a08830" (UID: "92cb41fd-bac8-4ff0-a05e-c2eed4a08830"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.534311 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bcc3822-e89b-4c04-8822-5a188dd6eabc" (UID: "9bcc3822-e89b-4c04-8822-5a188dd6eabc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.563668 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92cb41fd-bac8-4ff0-a05e-c2eed4a08830" (UID: "92cb41fd-bac8-4ff0-a05e-c2eed4a08830"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.593252 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.593277 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrb22\" (UniqueName: \"kubernetes.io/projected/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-kube-api-access-nrb22\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.593287 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.593295 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d82pp\" (UniqueName: \"kubernetes.io/projected/9bcc3822-e89b-4c04-8822-5a188dd6eabc-kube-api-access-d82pp\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.593303 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bcc3822-e89b-4c04-8822-5a188dd6eabc-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.593310 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92cb41fd-bac8-4ff0-a05e-c2eed4a08830-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:23 crc kubenswrapper[4988]: I1123 07:09:23.593318 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bcc3822-e89b-4c04-8822-5a188dd6eabc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.407413 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.407441 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.407699 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.425696 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.444723 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.455967 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.476084 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:09:24 crc kubenswrapper[4988]: E1123 07:09:24.476723 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerName="nova-metadata-metadata" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.476745 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerName="nova-metadata-metadata" Nov 23 07:09:24 crc kubenswrapper[4988]: E1123 07:09:24.476758 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cb41fd-bac8-4ff0-a05e-c2eed4a08830" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.476766 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cb41fd-bac8-4ff0-a05e-c2eed4a08830" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:09:24 crc kubenswrapper[4988]: E1123 07:09:24.476800 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerName="nova-metadata-log" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.476807 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerName="nova-metadata-log" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.477055 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cb41fd-bac8-4ff0-a05e-c2eed4a08830" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.477088 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerName="nova-metadata-log" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.477098 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" containerName="nova-metadata-metadata" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.478056 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.480455 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.480563 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.488524 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.515586 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgqz4\" (UniqueName: \"kubernetes.io/projected/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-kube-api-access-kgqz4\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.515766 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.515799 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.515825 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.515957 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.521271 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92cb41fd-bac8-4ff0-a05e-c2eed4a08830" path="/var/lib/kubelet/pods/92cb41fd-bac8-4ff0-a05e-c2eed4a08830/volumes" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.522602 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.553268 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.568721 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.602467 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.604514 
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.608081 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.610423 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619484 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgqz4\" (UniqueName: \"kubernetes.io/projected/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-kube-api-access-kgqz4\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619571 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-config-data\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619623 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619649 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619676 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619715 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqfsf\" (UniqueName: \"kubernetes.io/projected/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-kube-api-access-bqfsf\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619784 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619810 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-logs\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619844 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.619865 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.630706 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.634148 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.636019 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.636982 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.651426 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.653869 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-vrk6k"]
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.655422 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
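
Interleaved across the two replacement pods, every volume walks the same fixed progression: operationExecutor.VerifyControllerAttachedVolume, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. A compact illustration of that ordering (names simplified; not the kubelet's actual state-of-world types):

    package main

    import "fmt"

    type volumePhase int

    const (
        phaseAttachVerified volumePhase = iota // VerifyControllerAttachedVolume started
        phaseMountStarted                      // operationExecutor.MountVolume started
        phaseSetUpSucceeded                    // MountVolume.SetUp succeeded
    )

    func (p volumePhase) String() string {
        return [...]string{"attach-verified", "mount-started", "setup-succeeded"}[p]
    }

    func main() {
        // Volumes of the replacement nova-cell1-novncproxy-0 (UID c4c4a2cd-...),
        // as listed in the entries above.
        for _, v := range []string{"kube-api-access-kgqz4", "config-data",
            "vencrypt-tls-certs", "combined-ca-bundle", "nova-novncproxy-tls-certs"} {
            for p := phaseAttachVerified; p <= phaseSetUpSucceeded; p++ {
                fmt.Printf("%-26s %s\n", v, p)
            }
        }
    }

The phases are strictly ordered per volume, but the log shows that volumes of different pods progress concurrently, which is why the entries interleave.
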
Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.666281 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgqz4\" (UniqueName: \"kubernetes.io/projected/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-kube-api-access-kgqz4\") pod \"nova-cell1-novncproxy-0\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.698634 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-vrk6k"] Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.721549 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.721605 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-config-data\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.721814 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.721940 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqfsf\" (UniqueName: \"kubernetes.io/projected/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-kube-api-access-bqfsf\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.722016 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-config\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.722102 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.722128 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-logs\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.722211 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.722270 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.722303 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.722393 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24nnl\" (UniqueName: \"kubernetes.io/projected/75f2198a-7d70-4447-b8c2-62ac40b5c167-kube-api-access-24nnl\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.723330 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-logs\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.725396 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-config-data\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.725783 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.735792 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.739220 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqfsf\" (UniqueName: \"kubernetes.io/projected/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-kube-api-access-bqfsf\") pod \"nova-metadata-0\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " pod="openstack/nova-metadata-0" Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.810395 4988 util.go:30] "No sandbox for pod can be found. 
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.823750 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24nnl\" (UniqueName: \"kubernetes.io/projected/75f2198a-7d70-4447-b8c2-62ac40b5c167-kube-api-access-24nnl\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.823814 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.823870 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.823914 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-config\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.823969 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.823989 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.824976 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.825089 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-config\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.825133 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.825373 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.825679 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.841218 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24nnl\" (UniqueName: \"kubernetes.io/projected/75f2198a-7d70-4447-b8c2-62ac40b5c167-kube-api-access-24nnl\") pod \"dnsmasq-dns-5d7f54fb65-vrk6k\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") " pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:24 crc kubenswrapper[4988]: I1123 07:09:24.934659 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.118363 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.314929 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.421129 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v9shd" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerName="registry-server" containerID="cri-o://1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342" gracePeriod=2
Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.421511 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1","Type":"ContainerStarted","Data":"1357fabbfcd125562c5203f9b929775db08972b449100e02282445af5f753903"}
Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.423061 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.616169 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-vrk6k"]
Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.853026 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9shd"
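
The registry-server kill above carries gracePeriod=2: the runtime delivers the stop signal and escalates to a forced kill only if the container has not exited when the deadline lapses. The control flow, reduced to a select over a timeout — illustrative of the shape, not the kubelet's code:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // stopContainer races the container's own exit against the grace-period
    // deadline, the way a CRI-driven stop sequence does.
    func stopContainer(gracePeriod time.Duration, exited <-chan struct{}) string {
        ctx, cancel := context.WithTimeout(context.Background(), gracePeriod)
        defer cancel()
        select {
        case <-exited:
            return "exited within grace period"
        case <-ctx.Done():
            return "grace period expired; forced kill"
        }
    }

    func main() {
        exited := make(chan struct{})
        go func() { time.Sleep(500 * time.Millisecond); close(exited) }()
        // gracePeriod=2 (seconds) matches the registry-server entry above.
        fmt.Println(stopContainer(2*time.Second, exited))
    }
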
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9shd" Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.944928 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-utilities\") pod \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.945290 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-catalog-content\") pod \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.945332 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl5bn\" (UniqueName: \"kubernetes.io/projected/c27d856c-9f29-4c96-a825-7d8c2d7f151a-kube-api-access-jl5bn\") pod \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\" (UID: \"c27d856c-9f29-4c96-a825-7d8c2d7f151a\") " Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.949061 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-utilities" (OuterVolumeSpecName: "utilities") pod "c27d856c-9f29-4c96-a825-7d8c2d7f151a" (UID: "c27d856c-9f29-4c96-a825-7d8c2d7f151a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.952609 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27d856c-9f29-4c96-a825-7d8c2d7f151a-kube-api-access-jl5bn" (OuterVolumeSpecName: "kube-api-access-jl5bn") pod "c27d856c-9f29-4c96-a825-7d8c2d7f151a" (UID: "c27d856c-9f29-4c96-a825-7d8c2d7f151a"). InnerVolumeSpecName "kube-api-access-jl5bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:25 crc kubenswrapper[4988]: I1123 07:09:25.964082 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c27d856c-9f29-4c96-a825-7d8c2d7f151a" (UID: "c27d856c-9f29-4c96-a825-7d8c2d7f151a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.048028 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl5bn\" (UniqueName: \"kubernetes.io/projected/c27d856c-9f29-4c96-a825-7d8c2d7f151a-kube-api-access-jl5bn\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.048066 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.048076 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c27d856c-9f29-4c96-a825-7d8c2d7f151a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.439135 4988 generic.go:334] "Generic (PLEG): container finished" podID="75f2198a-7d70-4447-b8c2-62ac40b5c167" containerID="3d8943009ea79054fa20ce82f04a3cdc3c352ee9f38c84b121658a7640dd9879" exitCode=0 Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.439486 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" event={"ID":"75f2198a-7d70-4447-b8c2-62ac40b5c167","Type":"ContainerDied","Data":"3d8943009ea79054fa20ce82f04a3cdc3c352ee9f38c84b121658a7640dd9879"} Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.439620 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" event={"ID":"75f2198a-7d70-4447-b8c2-62ac40b5c167","Type":"ContainerStarted","Data":"5df1c00ec72f082971ce7793ec3e5aa24db3607de6146fde70e4c08cf4fbf0b6"} Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.442638 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b","Type":"ContainerStarted","Data":"1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484"} Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.442682 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b","Type":"ContainerStarted","Data":"f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08"} Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.442712 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b","Type":"ContainerStarted","Data":"63e176a530038a40bd6827631fda715bde9f4d5a6f4be5e35a2bcb77cd39ab74"} Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.445544 4988 generic.go:334] "Generic (PLEG): container finished" podID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerID="1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342" exitCode=0 Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.445653 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9shd" event={"ID":"c27d856c-9f29-4c96-a825-7d8c2d7f151a","Type":"ContainerDied","Data":"1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342"} Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.445686 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v9shd" event={"ID":"c27d856c-9f29-4c96-a825-7d8c2d7f151a","Type":"ContainerDied","Data":"773c6b05ef1f0c89a7f59eec66a879fe18527daf1d00e4ccb14d0532b16080e9"} Nov 
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.445881 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9shd"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.463711 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1","Type":"ContainerStarted","Data":"09b29081da7241818cdcc74db9b8d720eb74975763b6c6d467e42525454be55b"}
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.483917 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.483898953 podStartE2EDuration="2.483898953s" podCreationTimestamp="2025-11-23 07:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:26.48002353 +0000 UTC m=+1418.788536293" watchObservedRunningTime="2025-11-23 07:09:26.483898953 +0000 UTC m=+1418.792411716"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.518713 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.518694908 podStartE2EDuration="2.518694908s" podCreationTimestamp="2025-11-23 07:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:26.51130781 +0000 UTC m=+1418.819820593" watchObservedRunningTime="2025-11-23 07:09:26.518694908 +0000 UTC m=+1418.827207671"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.518791 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bcc3822-e89b-4c04-8822-5a188dd6eabc" path="/var/lib/kubelet/pods/9bcc3822-e89b-4c04-8822-5a188dd6eabc/volumes"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.646551 4988 scope.go:117] "RemoveContainer" containerID="96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.669831 4988 scope.go:117] "RemoveContainer" containerID="62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.691186 4988 scope.go:117] "RemoveContainer" containerID="1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342"
Nov 23 07:09:26 crc kubenswrapper[4988]: E1123 07:09:26.691946 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342\": container with ID starting with 1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342 not found: ID does not exist" containerID="1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.691977 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342"} err="failed to get container status \"1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342\": rpc error: code = NotFound desc = could not find container \"1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342\": container with ID starting with 1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342 not found: ID does not exist"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.691998 4988 scope.go:117] "RemoveContainer" containerID="96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7"
Nov 23 07:09:26 crc kubenswrapper[4988]: E1123 07:09:26.692343 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7\": container with ID starting with 96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7 not found: ID does not exist" containerID="96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.692372 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7"} err="failed to get container status \"96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7\": rpc error: code = NotFound desc = could not find container \"96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7\": container with ID starting with 96613dcc57363e06634666642a5c11d9bb5e1e49c51761078ca8f57a11456af7 not found: ID does not exist"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.692391 4988 scope.go:117] "RemoveContainer" containerID="62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984"
Nov 23 07:09:26 crc kubenswrapper[4988]: E1123 07:09:26.692648 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984\": container with ID starting with 62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984 not found: ID does not exist" containerID="62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.692686 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984"} err="failed to get container status \"62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984\": rpc error: code = NotFound desc = could not find container \"62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984\": container with ID starting with 62a605bca9b7ee3cdfca3cb2ee7f028d7ff76e2cbe8ec5e625de50dcd2c62984 not found: ID does not exist"
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.763267 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.763533 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="ceilometer-central-agent" containerID="cri-o://311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3" gracePeriod=30
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.763656 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="proxy-httpd" containerID="cri-o://5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6" gracePeriod=30
Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.763692 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="sg-core" containerID="cri-o://379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b" gracePeriod=30
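
Each RemoveContainer above fails with NotFound because CRI-O already removed the container together with its pod; pod_container_deletor logs the error and moves on, since a missing container is exactly the desired end state. A sketch of that idempotent treatment using gRPC status codes (requires the google.golang.org/grpc module; removeFn is a stand-in for a runtime call, not the kubelet's CRI client API):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer treats NotFound from the runtime as success: the
    // container is already gone, so there is nothing left to delete.
    func removeContainer(id string, removeFn func(string) error) error {
        if err := removeFn(id); status.Code(err) != codes.OK && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

    func main() {
        gone := func(id string) error {
            return status.Errorf(codes.NotFound, "could not find container %q", id)
        }
        err := removeContainer("1210dbf30bf7e42c72fbd2e5afae60e336d7397909d482d74a8a3f4ae5af5342", gone)
        fmt.Println("error after NotFound is swallowed:", err) // prints <nil>
    }
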
pod="openstack/ceilometer-0" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="sg-core" containerID="cri-o://379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b" gracePeriod=30 Nov 23 07:09:26 crc kubenswrapper[4988]: I1123 07:09:26.763719 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="ceilometer-notification-agent" containerID="cri-o://2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7" gracePeriod=30 Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.353843 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.468808 4988 generic.go:334] "Generic (PLEG): container finished" podID="1268b979-7e19-49a3-a72b-7361a801fb98" containerID="5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6" exitCode=0 Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.468865 4988 generic.go:334] "Generic (PLEG): container finished" podID="1268b979-7e19-49a3-a72b-7361a801fb98" containerID="379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b" exitCode=2 Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.468883 4988 generic.go:334] "Generic (PLEG): container finished" podID="1268b979-7e19-49a3-a72b-7361a801fb98" containerID="311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3" exitCode=0 Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.468896 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerDied","Data":"5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6"} Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.468947 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerDied","Data":"379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b"} Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.468962 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerDied","Data":"311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3"} Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.473048 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" event={"ID":"75f2198a-7d70-4447-b8c2-62ac40b5c167","Type":"ContainerStarted","Data":"d51442f2112dafeba9e3beeda4d0051ee6813f10a3f4230f01f59b8bc141e8ed"} Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.473124 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-log" containerID="cri-o://20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd" gracePeriod=30 Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.473314 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-api" containerID="cri-o://4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026" gracePeriod=30 Nov 23 07:09:27 crc kubenswrapper[4988]: I1123 07:09:27.504036 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" 
podStartSLOduration=3.504018689 podStartE2EDuration="3.504018689s" podCreationTimestamp="2025-11-23 07:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:27.499606763 +0000 UTC m=+1419.808119526" watchObservedRunningTime="2025-11-23 07:09:27.504018689 +0000 UTC m=+1419.812531452" Nov 23 07:09:28 crc kubenswrapper[4988]: I1123 07:09:28.483492 4988 generic.go:334] "Generic (PLEG): container finished" podID="2290f52c-4faa-402c-b432-e2b0626006d1" containerID="20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd" exitCode=143 Nov 23 07:09:28 crc kubenswrapper[4988]: I1123 07:09:28.483694 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2290f52c-4faa-402c-b432-e2b0626006d1","Type":"ContainerDied","Data":"20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd"} Nov 23 07:09:28 crc kubenswrapper[4988]: I1123 07:09:28.484660 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.148425 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.209180 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-log-httpd\") pod \"1268b979-7e19-49a3-a72b-7361a801fb98\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.209324 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-sg-core-conf-yaml\") pod \"1268b979-7e19-49a3-a72b-7361a801fb98\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.209372 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khfjm\" (UniqueName: \"kubernetes.io/projected/1268b979-7e19-49a3-a72b-7361a801fb98-kube-api-access-khfjm\") pod \"1268b979-7e19-49a3-a72b-7361a801fb98\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.209425 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-combined-ca-bundle\") pod \"1268b979-7e19-49a3-a72b-7361a801fb98\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.209477 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-config-data\") pod \"1268b979-7e19-49a3-a72b-7361a801fb98\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.209584 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-scripts\") pod \"1268b979-7e19-49a3-a72b-7361a801fb98\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.209624 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
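
In the startup-latency entries, podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and the zero-valued pulling timestamps ("0001-01-01 ...") mean no image pull contributed to the figure. An editorial aid that recomputes the dnsmasq value from the quoted timestamps; the " m=+..." suffix is Go's monotonic-clock reading and must be stripped before parsing:

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func parse(ts string) time.Time {
        if i := strings.Index(ts, " m=+"); i >= 0 {
            ts = ts[:i] // drop the monotonic-clock suffix, which time.Parse rejects
        }
        t, err := time.Parse(layout, ts)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := parse("2025-11-23 07:09:24 +0000 UTC")
        observed := parse("2025-11-23 07:09:27.504018689 +0000 UTC m=+1419.812531452")
        fmt.Println(observed.Sub(created)) // 3.504018689s, matching podStartSLOduration above
    }
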
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-run-httpd\") pod \"1268b979-7e19-49a3-a72b-7361a801fb98\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.209658 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-ceilometer-tls-certs\") pod \"1268b979-7e19-49a3-a72b-7361a801fb98\" (UID: \"1268b979-7e19-49a3-a72b-7361a801fb98\") " Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.210987 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1268b979-7e19-49a3-a72b-7361a801fb98" (UID: "1268b979-7e19-49a3-a72b-7361a801fb98"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.211330 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1268b979-7e19-49a3-a72b-7361a801fb98" (UID: "1268b979-7e19-49a3-a72b-7361a801fb98"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.215710 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1268b979-7e19-49a3-a72b-7361a801fb98-kube-api-access-khfjm" (OuterVolumeSpecName: "kube-api-access-khfjm") pod "1268b979-7e19-49a3-a72b-7361a801fb98" (UID: "1268b979-7e19-49a3-a72b-7361a801fb98"). InnerVolumeSpecName "kube-api-access-khfjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.216385 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-scripts" (OuterVolumeSpecName: "scripts") pod "1268b979-7e19-49a3-a72b-7361a801fb98" (UID: "1268b979-7e19-49a3-a72b-7361a801fb98"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.262490 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1268b979-7e19-49a3-a72b-7361a801fb98" (UID: "1268b979-7e19-49a3-a72b-7361a801fb98"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.291856 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1268b979-7e19-49a3-a72b-7361a801fb98" (UID: "1268b979-7e19-49a3-a72b-7361a801fb98"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.311883 4988 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.311926 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khfjm\" (UniqueName: \"kubernetes.io/projected/1268b979-7e19-49a3-a72b-7361a801fb98-kube-api-access-khfjm\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.311940 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.311950 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.311961 4988 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.311971 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1268b979-7e19-49a3-a72b-7361a801fb98-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.320826 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1268b979-7e19-49a3-a72b-7361a801fb98" (UID: "1268b979-7e19-49a3-a72b-7361a801fb98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.341026 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-config-data" (OuterVolumeSpecName: "config-data") pod "1268b979-7e19-49a3-a72b-7361a801fb98" (UID: "1268b979-7e19-49a3-a72b-7361a801fb98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.414481 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.414536 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1268b979-7e19-49a3-a72b-7361a801fb98-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.505216 4988 generic.go:334] "Generic (PLEG): container finished" podID="1268b979-7e19-49a3-a72b-7361a801fb98" containerID="2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7" exitCode=0 Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.505409 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerDied","Data":"2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7"} Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.505472 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1268b979-7e19-49a3-a72b-7361a801fb98","Type":"ContainerDied","Data":"7b6cee0185c0c6937f3acee3898c2d5792cd813a58a44e1fe3bfdec57f471c91"} Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.505474 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.505535 4988 scope.go:117] "RemoveContainer" containerID="5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.561838 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.562102 4988 scope.go:117] "RemoveContainer" containerID="379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.579049 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.608488 4988 scope.go:117] "RemoveContainer" containerID="2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.613581 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.614205 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="sg-core" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.614355 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="sg-core" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.614450 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerName="extract-content" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.614511 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerName="extract-content" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.614574 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" 
containerName="registry-server" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.614626 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerName="registry-server" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.614693 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="ceilometer-central-agent" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.614746 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="ceilometer-central-agent" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.614810 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="ceilometer-notification-agent" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.614870 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="ceilometer-notification-agent" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.614931 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerName="extract-utilities" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.614986 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerName="extract-utilities" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.615068 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="proxy-httpd" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.615179 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="proxy-httpd" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.615444 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="proxy-httpd" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.619703 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" containerName="registry-server" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.619830 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="ceilometer-central-agent" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.619899 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="ceilometer-notification-agent" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.619976 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" containerName="sg-core" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.622799 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.622931 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.626086 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.626238 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.626485 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.651732 4988 scope.go:117] "RemoveContainer" containerID="311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.673959 4988 scope.go:117] "RemoveContainer" containerID="5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.674427 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6\": container with ID starting with 5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6 not found: ID does not exist" containerID="5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.674473 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6"} err="failed to get container status \"5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6\": rpc error: code = NotFound desc = could not find container \"5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6\": container with ID starting with 5e545e38fd49dd09bf8c63501ce0ef43b6d225e825f72d7775051b0b78dd9da6 not found: ID does not exist" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.674506 4988 scope.go:117] "RemoveContainer" containerID="379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.674931 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b\": container with ID starting with 379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b not found: ID does not exist" containerID="379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.674963 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b"} err="failed to get container status \"379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b\": rpc error: code = NotFound desc = could not find container \"379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b\": container with ID starting with 379fc1928bd7b6276124fe797a40c9da657252fc8365008ca88ea835c489084b not found: ID does not exist" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.674986 4988 scope.go:117] "RemoveContainer" containerID="2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.675387 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7\": container with ID starting with 2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7 not found: ID does not exist" containerID="2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.675409 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7"} err="failed to get container status \"2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7\": rpc error: code = NotFound desc = could not find container \"2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7\": container with ID starting with 2e4852e34bdbfc9096d5790ccc54b07aef6f56db2b62f1be4460c71f5b2b62d7 not found: ID does not exist" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.675424 4988 scope.go:117] "RemoveContainer" containerID="311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3" Nov 23 07:09:29 crc kubenswrapper[4988]: E1123 07:09:29.675717 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3\": container with ID starting with 311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3 not found: ID does not exist" containerID="311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.675744 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3"} err="failed to get container status \"311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3\": rpc error: code = NotFound desc = could not find container \"311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3\": container with ID starting with 311f4e879d59f7fcfdfe1150a6f8cbae1772e2d0518731cf27f10ba8b9cd0cc3 not found: ID does not exist" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.810630 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.822033 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.822380 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-run-httpd\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.822884 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-config-data\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.822978 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngl87\" (UniqueName: \"kubernetes.io/projected/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-kube-api-access-ngl87\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.823010 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.823111 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.823153 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-scripts\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.823216 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-log-httpd\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925077 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-run-httpd\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925130 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-config-data\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925161 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngl87\" (UniqueName: \"kubernetes.io/projected/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-kube-api-access-ngl87\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925180 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925240 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " 
pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925266 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-scripts\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925294 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-log-httpd\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925378 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925583 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-run-httpd\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.925835 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-log-httpd\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.930686 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.932481 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.935170 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.936174 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.936970 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-config-data\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.938951 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.949377 
4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-scripts\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:29 crc kubenswrapper[4988]: I1123 07:09:29.955406 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngl87\" (UniqueName: \"kubernetes.io/projected/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-kube-api-access-ngl87\") pod \"ceilometer-0\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " pod="openstack/ceilometer-0" Nov 23 07:09:30 crc kubenswrapper[4988]: I1123 07:09:30.251677 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:09:30 crc kubenswrapper[4988]: I1123 07:09:30.510087 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1268b979-7e19-49a3-a72b-7361a801fb98" path="/var/lib/kubelet/pods/1268b979-7e19-49a3-a72b-7361a801fb98/volumes" Nov 23 07:09:30 crc kubenswrapper[4988]: I1123 07:09:30.707479 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.060692 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.150894 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2290f52c-4faa-402c-b432-e2b0626006d1-logs\") pod \"2290f52c-4faa-402c-b432-e2b0626006d1\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.151062 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfhfs\" (UniqueName: \"kubernetes.io/projected/2290f52c-4faa-402c-b432-e2b0626006d1-kube-api-access-tfhfs\") pod \"2290f52c-4faa-402c-b432-e2b0626006d1\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.151105 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-config-data\") pod \"2290f52c-4faa-402c-b432-e2b0626006d1\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.151142 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-combined-ca-bundle\") pod \"2290f52c-4faa-402c-b432-e2b0626006d1\" (UID: \"2290f52c-4faa-402c-b432-e2b0626006d1\") " Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.151426 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2290f52c-4faa-402c-b432-e2b0626006d1-logs" (OuterVolumeSpecName: "logs") pod "2290f52c-4faa-402c-b432-e2b0626006d1" (UID: "2290f52c-4faa-402c-b432-e2b0626006d1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.151765 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2290f52c-4faa-402c-b432-e2b0626006d1-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.161137 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2290f52c-4faa-402c-b432-e2b0626006d1-kube-api-access-tfhfs" (OuterVolumeSpecName: "kube-api-access-tfhfs") pod "2290f52c-4faa-402c-b432-e2b0626006d1" (UID: "2290f52c-4faa-402c-b432-e2b0626006d1"). InnerVolumeSpecName "kube-api-access-tfhfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.193423 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-config-data" (OuterVolumeSpecName: "config-data") pod "2290f52c-4faa-402c-b432-e2b0626006d1" (UID: "2290f52c-4faa-402c-b432-e2b0626006d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.200646 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2290f52c-4faa-402c-b432-e2b0626006d1" (UID: "2290f52c-4faa-402c-b432-e2b0626006d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.253211 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfhfs\" (UniqueName: \"kubernetes.io/projected/2290f52c-4faa-402c-b432-e2b0626006d1-kube-api-access-tfhfs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.253500 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.253512 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2290f52c-4faa-402c-b432-e2b0626006d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.529353 4988 generic.go:334] "Generic (PLEG): container finished" podID="2290f52c-4faa-402c-b432-e2b0626006d1" containerID="4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026" exitCode=0 Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.529411 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.529478 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2290f52c-4faa-402c-b432-e2b0626006d1","Type":"ContainerDied","Data":"4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026"} Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.529561 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2290f52c-4faa-402c-b432-e2b0626006d1","Type":"ContainerDied","Data":"2cb87e786be41527950d4666a38c8711873fdf39cc80f618c534f75c1a9699b8"} Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.529606 4988 scope.go:117] "RemoveContainer" containerID="4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.533070 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerStarted","Data":"0f03ec543429a626c8d33b783b6684da955e2b3df62fa5e03977931c6cff0b9b"} Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.533136 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerStarted","Data":"dcbf540137dbef8a8283a5884d19e2bdf8ac75f36eb9b709d3986aaf7ca029b1"} Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.563396 4988 scope.go:117] "RemoveContainer" containerID="20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.574565 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.586870 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.592830 4988 scope.go:117] "RemoveContainer" containerID="4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026" Nov 23 07:09:31 crc kubenswrapper[4988]: E1123 07:09:31.603374 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026\": container with ID starting with 4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026 not found: ID does not exist" containerID="4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.603417 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026"} err="failed to get container status \"4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026\": rpc error: code = NotFound desc = could not find container \"4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026\": container with ID starting with 4cab43cf013354b5581855fd30475fabf5037b51aaa4546d0f31f14705676026 not found: ID does not exist" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.603444 4988 scope.go:117] "RemoveContainer" containerID="20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd" Nov 23 07:09:31 crc kubenswrapper[4988]: E1123 07:09:31.603893 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd\": container with ID starting with 20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd not found: ID does not exist" containerID="20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.603951 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd"} err="failed to get container status \"20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd\": rpc error: code = NotFound desc = could not find container \"20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd\": container with ID starting with 20fb7cd591b7238b7ec9ad55e58ceeeea13932fe89aedcd09ea016e0f69d65fd not found: ID does not exist" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.607647 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:31 crc kubenswrapper[4988]: E1123 07:09:31.608055 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-log" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.608067 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-log" Nov 23 07:09:31 crc kubenswrapper[4988]: E1123 07:09:31.608100 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-api" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.608106 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-api" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.610601 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-log" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.610663 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" containerName="nova-api-api" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.616371 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.616863 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.618852 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.619106 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.622861 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.762662 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.762718 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.762983 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-config-data\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.763053 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.763137 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-logs\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.763338 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmd6v\" (UniqueName: \"kubernetes.io/projected/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-kube-api-access-mmd6v\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.865438 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-config-data\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.865815 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.865846 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-logs\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.865915 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmd6v\" (UniqueName: \"kubernetes.io/projected/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-kube-api-access-mmd6v\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.866102 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.866130 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.866842 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-logs\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.869928 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-config-data\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.870586 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.873815 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.874071 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.895838 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmd6v\" (UniqueName: \"kubernetes.io/projected/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-kube-api-access-mmd6v\") pod \"nova-api-0\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " pod="openstack/nova-api-0" Nov 
23 07:09:31 crc kubenswrapper[4988]: I1123 07:09:31.942698 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:32 crc kubenswrapper[4988]: I1123 07:09:32.391396 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:32 crc kubenswrapper[4988]: W1123 07:09:32.403207 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4acb84f_e11f_4ba3_af05_dbd45f9f5e30.slice/crio-2027cceece832d810c43dd6cb63a93e25dc66c3833e654890709ae906ba8d49a WatchSource:0}: Error finding container 2027cceece832d810c43dd6cb63a93e25dc66c3833e654890709ae906ba8d49a: Status 404 returned error can't find the container with id 2027cceece832d810c43dd6cb63a93e25dc66c3833e654890709ae906ba8d49a Nov 23 07:09:32 crc kubenswrapper[4988]: I1123 07:09:32.506734 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2290f52c-4faa-402c-b432-e2b0626006d1" path="/var/lib/kubelet/pods/2290f52c-4faa-402c-b432-e2b0626006d1/volumes" Nov 23 07:09:32 crc kubenswrapper[4988]: I1123 07:09:32.544644 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30","Type":"ContainerStarted","Data":"2027cceece832d810c43dd6cb63a93e25dc66c3833e654890709ae906ba8d49a"} Nov 23 07:09:32 crc kubenswrapper[4988]: I1123 07:09:32.547678 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerStarted","Data":"c03076932f1bd8fe2ee2079c7b8e87ba81b2baeb499ec22610473dfd45ba6936"} Nov 23 07:09:33 crc kubenswrapper[4988]: I1123 07:09:33.563661 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30","Type":"ContainerStarted","Data":"d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417"} Nov 23 07:09:33 crc kubenswrapper[4988]: I1123 07:09:33.564215 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30","Type":"ContainerStarted","Data":"657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c"} Nov 23 07:09:33 crc kubenswrapper[4988]: I1123 07:09:33.570018 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerStarted","Data":"bb50deda847e5c4d01192688b630e90316d62e3b678ef1a62e6da9a8a390be52"} Nov 23 07:09:33 crc kubenswrapper[4988]: I1123 07:09:33.599772 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.599746356 podStartE2EDuration="2.599746356s" podCreationTimestamp="2025-11-23 07:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:33.591549219 +0000 UTC m=+1425.900062022" watchObservedRunningTime="2025-11-23 07:09:33.599746356 +0000 UTC m=+1425.908259139" Nov 23 07:09:34 crc kubenswrapper[4988]: I1123 07:09:34.582714 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerStarted","Data":"543aecdb571138b8617239f7cdae649de1a2ea369419630d777d643c64fc0d98"} Nov 23 07:09:34 crc kubenswrapper[4988]: I1123 07:09:34.582974 4988 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:09:34 crc kubenswrapper[4988]: I1123 07:09:34.608601 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.192973234 podStartE2EDuration="5.608579612s" podCreationTimestamp="2025-11-23 07:09:29 +0000 UTC" firstStartedPulling="2025-11-23 07:09:30.723012502 +0000 UTC m=+1423.031525275" lastFinishedPulling="2025-11-23 07:09:34.13861889 +0000 UTC m=+1426.447131653" observedRunningTime="2025-11-23 07:09:34.603158442 +0000 UTC m=+1426.911671215" watchObservedRunningTime="2025-11-23 07:09:34.608579612 +0000 UTC m=+1426.917092375" Nov 23 07:09:34 crc kubenswrapper[4988]: I1123 07:09:34.811745 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:34 crc kubenswrapper[4988]: I1123 07:09:34.834881 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:34 crc kubenswrapper[4988]: I1123 07:09:34.935819 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 07:09:34 crc kubenswrapper[4988]: I1123 07:09:34.935885 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.120736 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.199570 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-8c2sj"] Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.199805 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" podUID="f839d139-cc2b-46cb-b100-d48211ad463c" containerName="dnsmasq-dns" containerID="cri-o://35fd0ed055ab1700c170fa7a1882e7bdaba65f13e3de0fe4b4b573c72249cd4d" gracePeriod=10 Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.597388 4988 generic.go:334] "Generic (PLEG): container finished" podID="f839d139-cc2b-46cb-b100-d48211ad463c" containerID="35fd0ed055ab1700c170fa7a1882e7bdaba65f13e3de0fe4b4b573c72249cd4d" exitCode=0 Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.597478 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" event={"ID":"f839d139-cc2b-46cb-b100-d48211ad463c","Type":"ContainerDied","Data":"35fd0ed055ab1700c170fa7a1882e7bdaba65f13e3de0fe4b4b573c72249cd4d"} Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.620782 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.811613 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.840092 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mj4c8"] Nov 23 07:09:35 crc kubenswrapper[4988]: E1123 07:09:35.840600 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f839d139-cc2b-46cb-b100-d48211ad463c" containerName="dnsmasq-dns" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.840627 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f839d139-cc2b-46cb-b100-d48211ad463c" containerName="dnsmasq-dns" Nov 23 07:09:35 crc kubenswrapper[4988]: E1123 07:09:35.840670 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f839d139-cc2b-46cb-b100-d48211ad463c" containerName="init" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.840680 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f839d139-cc2b-46cb-b100-d48211ad463c" containerName="init" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.841869 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f839d139-cc2b-46cb-b100-d48211ad463c" containerName="dnsmasq-dns" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.843223 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.846677 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.847127 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.873642 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mj4c8"] Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.955372 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.955678 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.964031 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-config\") pod \"f839d139-cc2b-46cb-b100-d48211ad463c\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.964244 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-nb\") pod \"f839d139-cc2b-46cb-b100-d48211ad463c\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.964314 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-svc\") pod \"f839d139-cc2b-46cb-b100-d48211ad463c\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.964587 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-999wd\" (UniqueName: \"kubernetes.io/projected/f839d139-cc2b-46cb-b100-d48211ad463c-kube-api-access-999wd\") pod \"f839d139-cc2b-46cb-b100-d48211ad463c\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.964641 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-sb\") pod \"f839d139-cc2b-46cb-b100-d48211ad463c\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.964713 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-swift-storage-0\") pod \"f839d139-cc2b-46cb-b100-d48211ad463c\" (UID: \"f839d139-cc2b-46cb-b100-d48211ad463c\") " Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.965623 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzkc\" (UniqueName: \"kubernetes.io/projected/63551f0e-2705-400c-8c56-7c16c25c812d-kube-api-access-5rzkc\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.965746 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-config-data\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.965843 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.966012 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-scripts\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:35 crc kubenswrapper[4988]: I1123 07:09:35.974400 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f839d139-cc2b-46cb-b100-d48211ad463c-kube-api-access-999wd" (OuterVolumeSpecName: "kube-api-access-999wd") pod "f839d139-cc2b-46cb-b100-d48211ad463c" (UID: "f839d139-cc2b-46cb-b100-d48211ad463c"). InnerVolumeSpecName "kube-api-access-999wd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.024173 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f839d139-cc2b-46cb-b100-d48211ad463c" (UID: "f839d139-cc2b-46cb-b100-d48211ad463c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.028793 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f839d139-cc2b-46cb-b100-d48211ad463c" (UID: "f839d139-cc2b-46cb-b100-d48211ad463c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.035088 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-config" (OuterVolumeSpecName: "config") pod "f839d139-cc2b-46cb-b100-d48211ad463c" (UID: "f839d139-cc2b-46cb-b100-d48211ad463c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.038248 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f839d139-cc2b-46cb-b100-d48211ad463c" (UID: "f839d139-cc2b-46cb-b100-d48211ad463c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.039955 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f839d139-cc2b-46cb-b100-d48211ad463c" (UID: "f839d139-cc2b-46cb-b100-d48211ad463c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068025 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rzkc\" (UniqueName: \"kubernetes.io/projected/63551f0e-2705-400c-8c56-7c16c25c812d-kube-api-access-5rzkc\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068094 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-config-data\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068135 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068217 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-scripts\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068283 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-999wd\" (UniqueName: \"kubernetes.io/projected/f839d139-cc2b-46cb-b100-d48211ad463c-kube-api-access-999wd\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068296 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068305 4988 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068315 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068323 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.068331 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f839d139-cc2b-46cb-b100-d48211ad463c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.072473 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-config-data\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " 
pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.073544 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-scripts\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.074236 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.088158 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rzkc\" (UniqueName: \"kubernetes.io/projected/63551f0e-2705-400c-8c56-7c16c25c812d-kube-api-access-5rzkc\") pod \"nova-cell1-cell-mapping-mj4c8\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.165361 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.610808 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" event={"ID":"f839d139-cc2b-46cb-b100-d48211ad463c","Type":"ContainerDied","Data":"793de2d3c48d4cc2c7ac2c4bcc3cb7dca4e85d25d700d5000313e827ea75eef5"} Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.618484 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-8c2sj" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.618496 4988 scope.go:117] "RemoveContainer" containerID="35fd0ed055ab1700c170fa7a1882e7bdaba65f13e3de0fe4b4b573c72249cd4d" Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.627379 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mj4c8"] Nov 23 07:09:36 crc kubenswrapper[4988]: W1123 07:09:36.629057 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63551f0e_2705_400c_8c56_7c16c25c812d.slice/crio-0282ba633b4e6032c019d625a899edf3a753613a6611a7cd96516a1b6c3a15d9 WatchSource:0}: Error finding container 0282ba633b4e6032c019d625a899edf3a753613a6611a7cd96516a1b6c3a15d9: Status 404 returned error can't find the container with id 0282ba633b4e6032c019d625a899edf3a753613a6611a7cd96516a1b6c3a15d9 Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.652352 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-8c2sj"] Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.663568 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-8c2sj"] Nov 23 07:09:36 crc kubenswrapper[4988]: I1123 07:09:36.755945 4988 scope.go:117] "RemoveContainer" containerID="fc8efdc69fbb254927b268c08bfc58c1060cf3383c4b15843278cf21b966b810" Nov 23 07:09:37 crc kubenswrapper[4988]: I1123 07:09:37.620978 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mj4c8" event={"ID":"63551f0e-2705-400c-8c56-7c16c25c812d","Type":"ContainerStarted","Data":"d7a0beeffd4bb592aa4fed3908d8d8d213c8a5810078bc9828345957f014c04e"} Nov 23 07:09:37 crc kubenswrapper[4988]: I1123 07:09:37.621333 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mj4c8" event={"ID":"63551f0e-2705-400c-8c56-7c16c25c812d","Type":"ContainerStarted","Data":"0282ba633b4e6032c019d625a899edf3a753613a6611a7cd96516a1b6c3a15d9"} Nov 23 07:09:37 crc kubenswrapper[4988]: I1123 07:09:37.659810 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mj4c8" podStartSLOduration=2.65979018 podStartE2EDuration="2.65979018s" podCreationTimestamp="2025-11-23 07:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:37.642984807 +0000 UTC m=+1429.951497570" watchObservedRunningTime="2025-11-23 07:09:37.65979018 +0000 UTC m=+1429.968302953" Nov 23 07:09:38 crc kubenswrapper[4988]: I1123 07:09:38.510872 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f839d139-cc2b-46cb-b100-d48211ad463c" path="/var/lib/kubelet/pods/f839d139-cc2b-46cb-b100-d48211ad463c/volumes" Nov 23 07:09:41 crc kubenswrapper[4988]: I1123 07:09:41.677255 4988 generic.go:334] "Generic (PLEG): container finished" podID="63551f0e-2705-400c-8c56-7c16c25c812d" containerID="d7a0beeffd4bb592aa4fed3908d8d8d213c8a5810078bc9828345957f014c04e" exitCode=0 Nov 23 07:09:41 crc kubenswrapper[4988]: I1123 07:09:41.677366 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mj4c8" event={"ID":"63551f0e-2705-400c-8c56-7c16c25c812d","Type":"ContainerDied","Data":"d7a0beeffd4bb592aa4fed3908d8d8d213c8a5810078bc9828345957f014c04e"} Nov 23 07:09:41 crc kubenswrapper[4988]: I1123 07:09:41.943267 4988 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:09:41 crc kubenswrapper[4988]: I1123 07:09:41.943331 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:09:42 crc kubenswrapper[4988]: I1123 07:09:42.960616 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:09:42 crc kubenswrapper[4988]: I1123 07:09:42.960650 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.066107 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.114662 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rzkc\" (UniqueName: \"kubernetes.io/projected/63551f0e-2705-400c-8c56-7c16c25c812d-kube-api-access-5rzkc\") pod \"63551f0e-2705-400c-8c56-7c16c25c812d\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.114938 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-config-data\") pod \"63551f0e-2705-400c-8c56-7c16c25c812d\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.114981 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-scripts\") pod \"63551f0e-2705-400c-8c56-7c16c25c812d\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.115837 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-combined-ca-bundle\") pod \"63551f0e-2705-400c-8c56-7c16c25c812d\" (UID: \"63551f0e-2705-400c-8c56-7c16c25c812d\") " Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.120799 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-scripts" (OuterVolumeSpecName: "scripts") pod "63551f0e-2705-400c-8c56-7c16c25c812d" (UID: "63551f0e-2705-400c-8c56-7c16c25c812d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.121167 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63551f0e-2705-400c-8c56-7c16c25c812d-kube-api-access-5rzkc" (OuterVolumeSpecName: "kube-api-access-5rzkc") pod "63551f0e-2705-400c-8c56-7c16c25c812d" (UID: "63551f0e-2705-400c-8c56-7c16c25c812d"). InnerVolumeSpecName "kube-api-access-5rzkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.159505 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63551f0e-2705-400c-8c56-7c16c25c812d" (UID: "63551f0e-2705-400c-8c56-7c16c25c812d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.159604 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-config-data" (OuterVolumeSpecName: "config-data") pod "63551f0e-2705-400c-8c56-7c16c25c812d" (UID: "63551f0e-2705-400c-8c56-7c16c25c812d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.218623 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.218650 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.218659 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63551f0e-2705-400c-8c56-7c16c25c812d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.218671 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rzkc\" (UniqueName: \"kubernetes.io/projected/63551f0e-2705-400c-8c56-7c16c25c812d-kube-api-access-5rzkc\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.693211 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mj4c8" event={"ID":"63551f0e-2705-400c-8c56-7c16c25c812d","Type":"ContainerDied","Data":"0282ba633b4e6032c019d625a899edf3a753613a6611a7cd96516a1b6c3a15d9"} Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.693253 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0282ba633b4e6032c019d625a899edf3a753613a6611a7cd96516a1b6c3a15d9" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.693325 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mj4c8" Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.916024 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.916310 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-log" containerID="cri-o://657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c" gracePeriod=30 Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.916864 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-api" containerID="cri-o://d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417" gracePeriod=30 Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.928035 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.928291 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="330518a2-2989-41d3-9a9f-620205d70a7a" containerName="nova-scheduler-scheduler" containerID="cri-o://677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1" gracePeriod=30 Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.943381 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.944002 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerName="nova-metadata-log" containerID="cri-o://f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08" gracePeriod=30 Nov 23 07:09:43 crc kubenswrapper[4988]: I1123 07:09:43.944316 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerName="nova-metadata-metadata" containerID="cri-o://1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484" gracePeriod=30 Nov 23 07:09:44 crc kubenswrapper[4988]: E1123 07:09:44.073462 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4acb84f_e11f_4ba3_af05_dbd45f9f5e30.slice/crio-657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa000ae_5fa8_4afd_aa9a_7685cdac3a3b.slice/crio-f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa000ae_5fa8_4afd_aa9a_7685cdac3a3b.slice/crio-conmon-f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4acb84f_e11f_4ba3_af05_dbd45f9f5e30.slice/crio-conmon-657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:09:44 crc kubenswrapper[4988]: E1123 07:09:44.472040 4988 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:09:44 crc kubenswrapper[4988]: E1123 07:09:44.473478 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:09:44 crc kubenswrapper[4988]: E1123 07:09:44.474636 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:09:44 crc kubenswrapper[4988]: E1123 07:09:44.474733 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="330518a2-2989-41d3-9a9f-620205d70a7a" containerName="nova-scheduler-scheduler" Nov 23 07:09:44 crc kubenswrapper[4988]: I1123 07:09:44.704656 4988 generic.go:334] "Generic (PLEG): container finished" podID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerID="657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c" exitCode=143 Nov 23 07:09:44 crc kubenswrapper[4988]: I1123 07:09:44.704731 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30","Type":"ContainerDied","Data":"657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c"} Nov 23 07:09:44 crc kubenswrapper[4988]: I1123 07:09:44.706879 4988 generic.go:334] "Generic (PLEG): container finished" podID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerID="f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08" exitCode=143 Nov 23 07:09:44 crc kubenswrapper[4988]: I1123 07:09:44.706921 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b","Type":"ContainerDied","Data":"f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08"} Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.570327 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.601552 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-nova-metadata-tls-certs\") pod \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.601662 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-config-data\") pod \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.601741 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-combined-ca-bundle\") pod \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.601827 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-logs\") pod \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.601991 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqfsf\" (UniqueName: \"kubernetes.io/projected/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-kube-api-access-bqfsf\") pod \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\" (UID: \"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b\") " Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.604498 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-logs" (OuterVolumeSpecName: "logs") pod "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" (UID: "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.614785 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-kube-api-access-bqfsf" (OuterVolumeSpecName: "kube-api-access-bqfsf") pod "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" (UID: "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b"). InnerVolumeSpecName "kube-api-access-bqfsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.666841 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-config-data" (OuterVolumeSpecName: "config-data") pod "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" (UID: "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.680577 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" (UID: "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.693659 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" (UID: "3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.705077 4988 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.705115 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.705273 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.705289 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.705306 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqfsf\" (UniqueName: \"kubernetes.io/projected/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b-kube-api-access-bqfsf\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.742356 4988 generic.go:334] "Generic (PLEG): container finished" podID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerID="1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484" exitCode=0 Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.742397 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b","Type":"ContainerDied","Data":"1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484"} Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.742421 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b","Type":"ContainerDied","Data":"63e176a530038a40bd6827631fda715bde9f4d5a6f4be5e35a2bcb77cd39ab74"} Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.742436 4988 scope.go:117] "RemoveContainer" containerID="1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.742562 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.765120 4988 scope.go:117] "RemoveContainer" containerID="f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.784350 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.805255 4988 scope.go:117] "RemoveContainer" containerID="1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484" Nov 23 07:09:47 crc kubenswrapper[4988]: E1123 07:09:47.805647 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484\": container with ID starting with 1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484 not found: ID does not exist" containerID="1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.805683 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484"} err="failed to get container status \"1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484\": rpc error: code = NotFound desc = could not find container \"1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484\": container with ID starting with 1da9c77e537f7287c3734aef459ee2695d5b61789dd3e0841f79fcf85f18a484 not found: ID does not exist" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.805712 4988 scope.go:117] "RemoveContainer" containerID="f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08" Nov 23 07:09:47 crc kubenswrapper[4988]: E1123 07:09:47.806023 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08\": container with ID starting with f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08 not found: ID does not exist" containerID="f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.806088 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08"} err="failed to get container status \"f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08\": rpc error: code = NotFound desc = could not find container \"f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08\": container with ID starting with f3c29a954be27503731bad69ac222e5f916dbef67f7918b63ee8d4f72bcf3d08 not found: ID does not exist" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.812255 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.820878 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:09:47 crc kubenswrapper[4988]: E1123 07:09:47.822311 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63551f0e-2705-400c-8c56-7c16c25c812d" containerName="nova-manage" Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.822331 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="63551f0e-2705-400c-8c56-7c16c25c812d" containerName="nova-manage" Nov 
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.822378 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerName="nova-metadata-log"
Nov 23 07:09:47 crc kubenswrapper[4988]: E1123 07:09:47.822391 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerName="nova-metadata-metadata"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.822397 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerName="nova-metadata-metadata"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.822579 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="63551f0e-2705-400c-8c56-7c16c25c812d" containerName="nova-manage"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.822598 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerName="nova-metadata-log"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.822610 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" containerName="nova-metadata-metadata"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.823689 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.829911 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.831713 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.841745 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.913600 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.913732 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm7xk\" (UniqueName: \"kubernetes.io/projected/2a28a2bd-cf03-47d7-b142-63b066fdeb42-kube-api-access-tm7xk\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.913779 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.913816 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-config-data\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:47 crc kubenswrapper[4988]: I1123 07:09:47.913851 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a28a2bd-cf03-47d7-b142-63b066fdeb42-logs\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.015966 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.016279 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm7xk\" (UniqueName: \"kubernetes.io/projected/2a28a2bd-cf03-47d7-b142-63b066fdeb42-kube-api-access-tm7xk\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.016315 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.016352 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-config-data\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.016379 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a28a2bd-cf03-47d7-b142-63b066fdeb42-logs\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.016666 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a28a2bd-cf03-47d7-b142-63b066fdeb42-logs\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.023066 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-config-data\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.023457 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.024275 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.030605 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm7xk\" (UniqueName: \"kubernetes.io/projected/2a28a2bd-cf03-47d7-b142-63b066fdeb42-kube-api-access-tm7xk\") pod \"nova-metadata-0\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.166251 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.512841 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b" path="/var/lib/kubelet/pods/3aa000ae-5fa8-4afd-aa9a-7685cdac3a3b/volumes"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.611172 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.724072 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.730673 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-config-data\") pod \"330518a2-2989-41d3-9a9f-620205d70a7a\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") "
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.730896 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-combined-ca-bundle\") pod \"330518a2-2989-41d3-9a9f-620205d70a7a\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") "
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.730953 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wncr\" (UniqueName: \"kubernetes.io/projected/330518a2-2989-41d3-9a9f-620205d70a7a-kube-api-access-7wncr\") pod \"330518a2-2989-41d3-9a9f-620205d70a7a\" (UID: \"330518a2-2989-41d3-9a9f-620205d70a7a\") "
Nov 23 07:09:48 crc kubenswrapper[4988]: W1123 07:09:48.733848 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a28a2bd_cf03_47d7_b142_63b066fdeb42.slice/crio-4dbea90f64bb68ab234178a7ad9e71acb72502e419e98844c6dffb29010994aa WatchSource:0}: Error finding container 4dbea90f64bb68ab234178a7ad9e71acb72502e419e98844c6dffb29010994aa: Status 404 returned error can't find the container with id 4dbea90f64bb68ab234178a7ad9e71acb72502e419e98844c6dffb29010994aa
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.736707 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330518a2-2989-41d3-9a9f-620205d70a7a-kube-api-access-7wncr" (OuterVolumeSpecName: "kube-api-access-7wncr") pod "330518a2-2989-41d3-9a9f-620205d70a7a" (UID: "330518a2-2989-41d3-9a9f-620205d70a7a"). InnerVolumeSpecName "kube-api-access-7wncr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.755475 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.760447 4988 generic.go:334] "Generic (PLEG): container finished" podID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerID="d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417" exitCode=0 Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.760533 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30","Type":"ContainerDied","Data":"d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417"} Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.760562 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30","Type":"ContainerDied","Data":"2027cceece832d810c43dd6cb63a93e25dc66c3833e654890709ae906ba8d49a"} Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.760580 4988 scope.go:117] "RemoveContainer" containerID="d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.764632 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "330518a2-2989-41d3-9a9f-620205d70a7a" (UID: "330518a2-2989-41d3-9a9f-620205d70a7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.768932 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a28a2bd-cf03-47d7-b142-63b066fdeb42","Type":"ContainerStarted","Data":"4dbea90f64bb68ab234178a7ad9e71acb72502e419e98844c6dffb29010994aa"} Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.770076 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-config-data" (OuterVolumeSpecName: "config-data") pod "330518a2-2989-41d3-9a9f-620205d70a7a" (UID: "330518a2-2989-41d3-9a9f-620205d70a7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.770536 4988 generic.go:334] "Generic (PLEG): container finished" podID="330518a2-2989-41d3-9a9f-620205d70a7a" containerID="677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1" exitCode=0 Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.770573 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"330518a2-2989-41d3-9a9f-620205d70a7a","Type":"ContainerDied","Data":"677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1"} Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.770594 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"330518a2-2989-41d3-9a9f-620205d70a7a","Type":"ContainerDied","Data":"af0cfd2c56dbb6d7be720fffffa323cea5e6e93ac84be71b5292c1cfe6482e8c"} Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.770650 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.796225 4988 scope.go:117] "RemoveContainer" containerID="657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.833910 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmd6v\" (UniqueName: \"kubernetes.io/projected/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-kube-api-access-mmd6v\") pod \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.833979 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-config-data\") pod \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.833986 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.834009 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-combined-ca-bundle\") pod \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.834130 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-logs\") pod \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.834160 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-public-tls-certs\") pod \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.834864 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-logs" (OuterVolumeSpecName: "logs") pod "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" (UID: "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.835692 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-internal-tls-certs\") pod \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\" (UID: \"d4acb84f-e11f-4ba3-af05-dbd45f9f5e30\") " Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.836427 4988 scope.go:117] "RemoveContainer" containerID="d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.836471 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wncr\" (UniqueName: \"kubernetes.io/projected/330518a2-2989-41d3-9a9f-620205d70a7a-kube-api-access-7wncr\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.836529 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.836541 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.836550 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330518a2-2989-41d3-9a9f-620205d70a7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.841248 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-kube-api-access-mmd6v" (OuterVolumeSpecName: "kube-api-access-mmd6v") pod "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" (UID: "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30"). InnerVolumeSpecName "kube-api-access-mmd6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:48 crc kubenswrapper[4988]: E1123 07:09:48.845008 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417\": container with ID starting with d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417 not found: ID does not exist" containerID="d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.845116 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417"} err="failed to get container status \"d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417\": rpc error: code = NotFound desc = could not find container \"d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417\": container with ID starting with d95b5b9b66608789ad1300dd0c970a26ac80bf474c64a3e95d14a5a36c42b417 not found: ID does not exist" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.861044 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.861225 4988 scope.go:117] "RemoveContainer" containerID="657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c" Nov 23 07:09:48 crc kubenswrapper[4988]: E1123 07:09:48.862027 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c\": container with ID starting with 657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c not found: ID does not exist" containerID="657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.862071 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c"} err="failed to get container status \"657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c\": rpc error: code = NotFound desc = could not find container \"657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c\": container with ID starting with 657eca18fa1bfc3176994aebe2e576dd69c744a9268445a70602b1063508e57c not found: ID does not exist" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.862097 4988 scope.go:117] "RemoveContainer" containerID="677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.887454 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:09:48 crc kubenswrapper[4988]: E1123 07:09:48.887980 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-api" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.888001 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-api" Nov 23 07:09:48 crc kubenswrapper[4988]: E1123 07:09:48.888018 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330518a2-2989-41d3-9a9f-620205d70a7a" containerName="nova-scheduler-scheduler" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.888027 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="330518a2-2989-41d3-9a9f-620205d70a7a" containerName="nova-scheduler-scheduler" Nov 23 07:09:48 crc kubenswrapper[4988]: E1123 07:09:48.888058 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-log" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.888065 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-log" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.888282 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-log" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.888305 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" containerName="nova-api-api" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.888317 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="330518a2-2989-41d3-9a9f-620205d70a7a" containerName="nova-scheduler-scheduler" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.888985 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.891300 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.909511 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" (UID: "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.909567 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.909641 4988 scope.go:117] "RemoveContainer" containerID="677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1" Nov 23 07:09:48 crc kubenswrapper[4988]: E1123 07:09:48.910522 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1\": container with ID starting with 677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1 not found: ID does not exist" containerID="677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.910557 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1"} err="failed to get container status \"677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1\": rpc error: code = NotFound desc = could not find container \"677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1\": container with ID starting with 677e31b246d04f9200b1be68f7d5069c1e4705dad224c40ed6d67b7564ecd3b1 not found: ID does not exist" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.922925 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-config-data" (OuterVolumeSpecName: "config-data") pod "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" (UID: "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.938489 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7wq\" (UniqueName: \"kubernetes.io/projected/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-kube-api-access-9b7wq\") pod \"nova-scheduler-0\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " pod="openstack/nova-scheduler-0" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.938543 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-config-data\") pod \"nova-scheduler-0\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " pod="openstack/nova-scheduler-0" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.938586 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " pod="openstack/nova-scheduler-0" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.938722 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmd6v\" (UniqueName: \"kubernetes.io/projected/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-kube-api-access-mmd6v\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.938733 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.938742 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.941308 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" (UID: "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:48 crc kubenswrapper[4988]: I1123 07:09:48.950321 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" (UID: "d4acb84f-e11f-4ba3-af05-dbd45f9f5e30"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.040765 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b7wq\" (UniqueName: \"kubernetes.io/projected/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-kube-api-access-9b7wq\") pod \"nova-scheduler-0\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " pod="openstack/nova-scheduler-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.040839 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-config-data\") pod \"nova-scheduler-0\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " pod="openstack/nova-scheduler-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.040893 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " pod="openstack/nova-scheduler-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.041051 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.041069 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.044909 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-config-data\") pod \"nova-scheduler-0\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " pod="openstack/nova-scheduler-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.048641 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " pod="openstack/nova-scheduler-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.071573 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b7wq\" (UniqueName: \"kubernetes.io/projected/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-kube-api-access-9b7wq\") pod \"nova-scheduler-0\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " pod="openstack/nova-scheduler-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.223978 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.714583 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.780812 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.791655 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a28a2bd-cf03-47d7-b142-63b066fdeb42","Type":"ContainerStarted","Data":"d2b1f46c3d98eca77fce5ba02072ca8a40d4ce3218cb93220d983e5effd1b750"} Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.791763 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a28a2bd-cf03-47d7-b142-63b066fdeb42","Type":"ContainerStarted","Data":"dcd0354caf733195781e1f948fd33c5f325bbd542de067c894db308d568445d1"} Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.797063 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89362c9c-bf2d-4e66-8ac3-7b288262b3d8","Type":"ContainerStarted","Data":"ddc57bef8c9e208f81125bbac64a7b4441a6cc793a2bf1f5d1750400a17a4753"} Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.828863 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.8288420480000003 podStartE2EDuration="2.828842048s" podCreationTimestamp="2025-11-23 07:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:49.812965117 +0000 UTC m=+1442.121477940" watchObservedRunningTime="2025-11-23 07:09:49.828842048 +0000 UTC m=+1442.137354831" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.894673 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.905037 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.915129 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.917125 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.921877 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.921930 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.922795 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.941472 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.962337 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.962407 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-config-data\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.962429 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zzz4\" (UniqueName: \"kubernetes.io/projected/612dbf27-0967-4833-a62f-c86a008fe257-kube-api-access-9zzz4\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.962490 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-public-tls-certs\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.962521 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-internal-tls-certs\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:49 crc kubenswrapper[4988]: I1123 07:09:49.962539 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/612dbf27-0967-4833-a62f-c86a008fe257-logs\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.064817 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-config-data\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.065544 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zzz4\" (UniqueName: \"kubernetes.io/projected/612dbf27-0967-4833-a62f-c86a008fe257-kube-api-access-9zzz4\") pod 
\"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.065757 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-public-tls-certs\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.065953 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-internal-tls-certs\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.066963 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/612dbf27-0967-4833-a62f-c86a008fe257-logs\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.069101 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-config-data\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.070128 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-internal-tls-certs\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.070311 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/612dbf27-0967-4833-a62f-c86a008fe257-logs\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.070603 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.074064 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.074640 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-public-tls-certs\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.091501 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zzz4\" (UniqueName: \"kubernetes.io/projected/612dbf27-0967-4833-a62f-c86a008fe257-kube-api-access-9zzz4\") pod \"nova-api-0\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " 
pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.250871 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.506547 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330518a2-2989-41d3-9a9f-620205d70a7a" path="/var/lib/kubelet/pods/330518a2-2989-41d3-9a9f-620205d70a7a/volumes" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.507571 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4acb84f-e11f-4ba3-af05-dbd45f9f5e30" path="/var/lib/kubelet/pods/d4acb84f-e11f-4ba3-af05-dbd45f9f5e30/volumes" Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.731288 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:09:50 crc kubenswrapper[4988]: W1123 07:09:50.731436 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod612dbf27_0967_4833_a62f_c86a008fe257.slice/crio-6d4a82952ef89bf6a772c7a8d6db803889cc71e5960e06c24c1c93e959c39980 WatchSource:0}: Error finding container 6d4a82952ef89bf6a772c7a8d6db803889cc71e5960e06c24c1c93e959c39980: Status 404 returned error can't find the container with id 6d4a82952ef89bf6a772c7a8d6db803889cc71e5960e06c24c1c93e959c39980 Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.806031 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"612dbf27-0967-4833-a62f-c86a008fe257","Type":"ContainerStarted","Data":"6d4a82952ef89bf6a772c7a8d6db803889cc71e5960e06c24c1c93e959c39980"} Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.810234 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89362c9c-bf2d-4e66-8ac3-7b288262b3d8","Type":"ContainerStarted","Data":"039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b"} Nov 23 07:09:50 crc kubenswrapper[4988]: I1123 07:09:50.832651 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.832628423 podStartE2EDuration="2.832628423s" podCreationTimestamp="2025-11-23 07:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:50.823405342 +0000 UTC m=+1443.131918115" watchObservedRunningTime="2025-11-23 07:09:50.832628423 +0000 UTC m=+1443.141141186" Nov 23 07:09:51 crc kubenswrapper[4988]: I1123 07:09:51.826771 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"612dbf27-0967-4833-a62f-c86a008fe257","Type":"ContainerStarted","Data":"fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0"} Nov 23 07:09:51 crc kubenswrapper[4988]: I1123 07:09:51.827643 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"612dbf27-0967-4833-a62f-c86a008fe257","Type":"ContainerStarted","Data":"a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a"} Nov 23 07:09:51 crc kubenswrapper[4988]: I1123 07:09:51.848059 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.848042585 podStartE2EDuration="2.848042585s" podCreationTimestamp="2025-11-23 07:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 
Nov 23 07:09:53 crc kubenswrapper[4988]: I1123 07:09:53.166732 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 07:09:53 crc kubenswrapper[4988]: I1123 07:09:53.166817 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 07:09:54 crc kubenswrapper[4988]: I1123 07:09:54.224298 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 23 07:09:56 crc kubenswrapper[4988]: I1123 07:09:56.633775 4988 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podc27d856c-9f29-4c96-a825-7d8c2d7f151a"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podc27d856c-9f29-4c96-a825-7d8c2d7f151a] : Timed out while waiting for systemd to remove kubepods-burstable-podc27d856c_9f29_4c96_a825_7d8c2d7f151a.slice"
Nov 23 07:09:56 crc kubenswrapper[4988]: E1123 07:09:56.633950 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable podc27d856c-9f29-4c96-a825-7d8c2d7f151a] : unable to destroy cgroup paths for cgroup [kubepods burstable podc27d856c-9f29-4c96-a825-7d8c2d7f151a] : Timed out while waiting for systemd to remove kubepods-burstable-podc27d856c_9f29_4c96_a825_7d8c2d7f151a.slice" pod="openshift-marketplace/redhat-marketplace-v9shd" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a"
Nov 23 07:09:56 crc kubenswrapper[4988]: I1123 07:09:56.874572 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v9shd"
Nov 23 07:09:56 crc kubenswrapper[4988]: I1123 07:09:56.934294 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9shd"]
Nov 23 07:09:56 crc kubenswrapper[4988]: I1123 07:09:56.946205 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v9shd"]
Nov 23 07:09:58 crc kubenswrapper[4988]: I1123 07:09:58.166788 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 07:09:58 crc kubenswrapper[4988]: I1123 07:09:58.166863 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 07:09:58 crc kubenswrapper[4988]: I1123 07:09:58.509707 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27d856c-9f29-4c96-a825-7d8c2d7f151a" path="/var/lib/kubelet/pods/c27d856c-9f29-4c96-a825-7d8c2d7f151a/volumes"
Nov 23 07:09:59 crc kubenswrapper[4988]: I1123 07:09:59.193478 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 07:09:59 crc kubenswrapper[4988]: I1123 07:09:59.193487 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 07:09:59 crc kubenswrapper[4988]: I1123 07:09:59.224383 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 07:09:59 crc kubenswrapper[4988]: I1123 07:09:59.258907 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 07:09:59 crc kubenswrapper[4988]: I1123 07:09:59.936152 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 07:10:00 crc kubenswrapper[4988]: I1123 07:10:00.251582 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:10:00 crc kubenswrapper[4988]: I1123 07:10:00.251644 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:10:00 crc kubenswrapper[4988]: I1123 07:10:00.260871 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 07:10:01 crc kubenswrapper[4988]: I1123 07:10:01.256473 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:10:01 crc kubenswrapper[4988]: I1123 07:10:01.297459 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:10:08 crc kubenswrapper[4988]: I1123 07:10:08.173623 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 07:10:08 crc kubenswrapper[4988]: I1123 07:10:08.174141 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 07:10:08 crc kubenswrapper[4988]: I1123 07:10:08.180599 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 07:10:08 crc kubenswrapper[4988]: I1123 07:10:08.185464 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 07:10:10 crc kubenswrapper[4988]: I1123 07:10:10.263365 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:10:10 crc kubenswrapper[4988]: I1123 07:10:10.264060 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:10:10 crc kubenswrapper[4988]: I1123 07:10:10.264680 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:10:10 crc kubenswrapper[4988]: I1123 07:10:10.264751 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:10:10 crc kubenswrapper[4988]: I1123 07:10:10.276688 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:10:10 crc kubenswrapper[4988]: I1123 07:10:10.277828 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.623258 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.623904 4988 
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.649703 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.720802 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.721354 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerName="openstack-network-exporter" containerID="cri-o://96a7d026dbf741ef763cbef53b41d738422ff89aae829eedde4d6c6dc76818d6" gracePeriod=300
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.773615 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder80d1-account-delete-sq4dt"]
Nov 23 07:10:30 crc kubenswrapper[4988]: E1123 07:10:30.774021 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" containerName="openstackclient"
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.774034 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" containerName="openstackclient"
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.774284 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" containerName="openstackclient"
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.774911 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder80d1-account-delete-sq4dt"
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.797035 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-operator-scripts\") pod \"cinder80d1-account-delete-sq4dt\" (UID: \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\") " pod="openstack/cinder80d1-account-delete-sq4dt"
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.797128 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vltj\" (UniqueName: \"kubernetes.io/projected/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-kube-api-access-5vltj\") pod \"cinder80d1-account-delete-sq4dt\" (UID: \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\") " pod="openstack/cinder80d1-account-delete-sq4dt"
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.817180 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement2c17-account-delete-ps9t4"]
Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.818438 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement2c17-account-delete-ps9t4"
Need to start a new one" pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.853260 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder80d1-account-delete-sq4dt"] Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.861777 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerName="ovsdbserver-sb" containerID="cri-o://b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb" gracePeriod=300 Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.877474 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.898832 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5znj\" (UniqueName: \"kubernetes.io/projected/52da7e45-da4b-4b22-b4d9-de675091c282-kube-api-access-f5znj\") pod \"placement2c17-account-delete-ps9t4\" (UID: \"52da7e45-da4b-4b22-b4d9-de675091c282\") " pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.898899 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-operator-scripts\") pod \"cinder80d1-account-delete-sq4dt\" (UID: \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\") " pod="openstack/cinder80d1-account-delete-sq4dt" Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.898956 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52da7e45-da4b-4b22-b4d9-de675091c282-operator-scripts\") pod \"placement2c17-account-delete-ps9t4\" (UID: \"52da7e45-da4b-4b22-b4d9-de675091c282\") " pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.899022 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vltj\" (UniqueName: \"kubernetes.io/projected/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-kube-api-access-5vltj\") pod \"cinder80d1-account-delete-sq4dt\" (UID: \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\") " pod="openstack/cinder80d1-account-delete-sq4dt" Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.900047 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-operator-scripts\") pod \"cinder80d1-account-delete-sq4dt\" (UID: \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\") " pod="openstack/cinder80d1-account-delete-sq4dt" Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.925602 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement2c17-account-delete-ps9t4"] Nov 23 07:10:30 crc kubenswrapper[4988]: I1123 07:10:30.933865 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vltj\" (UniqueName: \"kubernetes.io/projected/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-kube-api-access-5vltj\") pod \"cinder80d1-account-delete-sq4dt\" (UID: \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\") " pod="openstack/cinder80d1-account-delete-sq4dt" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.001917 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5znj\" 
(UniqueName: \"kubernetes.io/projected/52da7e45-da4b-4b22-b4d9-de675091c282-kube-api-access-f5znj\") pod \"placement2c17-account-delete-ps9t4\" (UID: \"52da7e45-da4b-4b22-b4d9-de675091c282\") " pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.002025 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52da7e45-da4b-4b22-b4d9-de675091c282-operator-scripts\") pod \"placement2c17-account-delete-ps9t4\" (UID: \"52da7e45-da4b-4b22-b4d9-de675091c282\") " pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.002183 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.002249 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data podName:692be1c8-4d8f-4676-89df-19f82b43f043 nodeName:}" failed. No retries permitted until 2025-11-23 07:10:31.502231265 +0000 UTC m=+1483.810744028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data") pod "rabbitmq-cell1-server-0" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043") : configmap "rabbitmq-cell1-config-data" not found Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.003944 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52da7e45-da4b-4b22-b4d9-de675091c282-operator-scripts\") pod \"placement2c17-account-delete-ps9t4\" (UID: \"52da7e45-da4b-4b22-b4d9-de675091c282\") " pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.034773 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5znj\" (UniqueName: \"kubernetes.io/projected/52da7e45-da4b-4b22-b4d9-de675091c282-kube-api-access-f5znj\") pod \"placement2c17-account-delete-ps9t4\" (UID: \"52da7e45-da4b-4b22-b4d9-de675091c282\") " pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.043624 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapicb35-account-delete-wsxgk"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.044927 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapicb35-account-delete-wsxgk" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.064485 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapicb35-account-delete-wsxgk"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.106343 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder80d1-account-delete-sq4dt" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.123313 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell05c20-account-delete-fvdvh"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.125572 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell05c20-account-delete-fvdvh" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.130945 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell05c20-account-delete-fvdvh"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.198941 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.204740 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcsrv\" (UniqueName: \"kubernetes.io/projected/610d9cd6-32c2-4a24-a462-df3c8da3f90f-kube-api-access-fcsrv\") pod \"novaapicb35-account-delete-wsxgk\" (UID: \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\") " pod="openstack/novaapicb35-account-delete-wsxgk" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.204841 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/610d9cd6-32c2-4a24-a462-df3c8da3f90f-operator-scripts\") pod \"novaapicb35-account-delete-wsxgk\" (UID: \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\") " pod="openstack/novaapicb35-account-delete-wsxgk" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.258277 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance23d0-account-delete-g2lxz"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.259853 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance23d0-account-delete-g2lxz" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.272765 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance23d0-account-delete-g2lxz"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.297620 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c11163ee-e1e7-47a7-a454-610a8b27542f/ovsdbserver-sb/0.log" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.297684 4988 generic.go:334] "Generic (PLEG): container finished" podID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerID="96a7d026dbf741ef763cbef53b41d738422ff89aae829eedde4d6c6dc76818d6" exitCode=2 Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.297712 4988 generic.go:334] "Generic (PLEG): container finished" podID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerID="b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb" exitCode=143 Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.297947 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c11163ee-e1e7-47a7-a454-610a8b27542f","Type":"ContainerDied","Data":"96a7d026dbf741ef763cbef53b41d738422ff89aae829eedde4d6c6dc76818d6"} Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.298016 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c11163ee-e1e7-47a7-a454-610a8b27542f","Type":"ContainerDied","Data":"b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb"} Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.314388 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcsrv\" (UniqueName: \"kubernetes.io/projected/610d9cd6-32c2-4a24-a462-df3c8da3f90f-kube-api-access-fcsrv\") pod \"novaapicb35-account-delete-wsxgk\" (UID: \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\") " pod="openstack/novaapicb35-account-delete-wsxgk" Nov 23 
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.314510 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/610d9cd6-32c2-4a24-a462-df3c8da3f90f-operator-scripts\") pod \"novaapicb35-account-delete-wsxgk\" (UID: \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\") " pod="openstack/novaapicb35-account-delete-wsxgk"
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.314571 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tlv\" (UniqueName: \"kubernetes.io/projected/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-kube-api-access-l6tlv\") pod \"novacell05c20-account-delete-fvdvh\" (UID: \"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60\") " pod="openstack/novacell05c20-account-delete-fvdvh"
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.316414 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/610d9cd6-32c2-4a24-a462-df3c8da3f90f-operator-scripts\") pod \"novaapicb35-account-delete-wsxgk\" (UID: \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\") " pod="openstack/novaapicb35-account-delete-wsxgk"
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.320081 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4mmc7"]
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.354950 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcsrv\" (UniqueName: \"kubernetes.io/projected/610d9cd6-32c2-4a24-a462-df3c8da3f90f-kube-api-access-fcsrv\") pod \"novaapicb35-account-delete-wsxgk\" (UID: \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\") " pod="openstack/novaapicb35-account-delete-wsxgk"
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.399930 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4mmc7"]
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.429538 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6tlv\" (UniqueName: \"kubernetes.io/projected/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-kube-api-access-l6tlv\") pod \"novacell05c20-account-delete-fvdvh\" (UID: \"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60\") " pod="openstack/novacell05c20-account-delete-fvdvh"
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.429600 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-operator-scripts\") pod \"glance23d0-account-delete-g2lxz\" (UID: \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\") " pod="openstack/glance23d0-account-delete-g2lxz"
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.429641 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8gh\" (UniqueName: \"kubernetes.io/projected/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-kube-api-access-4m8gh\") pod \"glance23d0-account-delete-g2lxz\" (UID: \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\") " pod="openstack/glance23d0-account-delete-g2lxz"
\"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\") " pod="openstack/glance23d0-account-delete-g2lxz" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.429702 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-operator-scripts\") pod \"novacell05c20-account-delete-fvdvh\" (UID: \"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60\") " pod="openstack/novacell05c20-account-delete-fvdvh" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.436671 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-operator-scripts\") pod \"novacell05c20-account-delete-fvdvh\" (UID: \"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60\") " pod="openstack/novacell05c20-account-delete-fvdvh" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.438968 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapicb35-account-delete-wsxgk" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.470879 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6tlv\" (UniqueName: \"kubernetes.io/projected/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-kube-api-access-l6tlv\") pod \"novacell05c20-account-delete-fvdvh\" (UID: \"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60\") " pod="openstack/novacell05c20-account-delete-fvdvh" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.516065 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.529186 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron21b3-account-delete-p2f8j"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.530798 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.532231 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-operator-scripts\") pod \"glance23d0-account-delete-g2lxz\" (UID: \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\") " pod="openstack/glance23d0-account-delete-g2lxz" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.532969 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-operator-scripts\") pod \"glance23d0-account-delete-g2lxz\" (UID: \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\") " pod="openstack/glance23d0-account-delete-g2lxz" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.533026 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8gh\" (UniqueName: \"kubernetes.io/projected/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-kube-api-access-4m8gh\") pod \"glance23d0-account-delete-g2lxz\" (UID: \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\") " pod="openstack/glance23d0-account-delete-g2lxz" Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.533125 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.533185 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data podName:692be1c8-4d8f-4676-89df-19f82b43f043 nodeName:}" failed. No retries permitted until 2025-11-23 07:10:32.533167799 +0000 UTC m=+1484.841680562 (durationBeforeRetry 1s). 
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.543247 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.543517 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="ovn-northd" containerID="cri-o://a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" gracePeriod=30
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.543665 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="openstack-network-exporter" containerID="cri-o://5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749" gracePeriod=30
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.560975 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8gh\" (UniqueName: \"kubernetes.io/projected/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-kube-api-access-4m8gh\") pod \"glance23d0-account-delete-g2lxz\" (UID: \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\") " pod="openstack/glance23d0-account-delete-g2lxz"
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.572496 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron21b3-account-delete-p2f8j"]
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.587909 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbicanbfce-account-delete-mthln"]
Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.594367 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbicanbfce-account-delete-mthln"
Need to start a new one" pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.602602 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbicanbfce-account-delete-mthln"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.624315 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zcfbn"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.648252 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx24t\" (UniqueName: \"kubernetes.io/projected/4b82b9a4-6707-446c-abea-2d4a560a43d7-kube-api-access-gx24t\") pod \"neutron21b3-account-delete-p2f8j\" (UID: \"4b82b9a4-6707-446c-abea-2d4a560a43d7\") " pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.648325 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.648352 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b82b9a4-6707-446c-abea-2d4a560a43d7-operator-scripts\") pod \"neutron21b3-account-delete-p2f8j\" (UID: \"4b82b9a4-6707-446c-abea-2d4a560a43d7\") " pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.648383 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data podName:0b12d6f8-ea7a-4a60-b459-11563683791d nodeName:}" failed. No retries permitted until 2025-11-23 07:10:32.148365202 +0000 UTC m=+1484.456877955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data") pod "rabbitmq-server-0" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d") : configmap "rabbitmq-config-data" not found Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.656604 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance23d0-account-delete-g2lxz" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.657123 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-7xsjx"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.681099 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-rmdh8"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.682674 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-rmdh8" podUID="2afd4c0a-59f2-4313-a198-4e0e8255f163" containerName="openstack-network-exporter" containerID="cri-o://ffefcfe13aa5e78958f29cba26a0887e5de7476e1ca25781589f55aaf4d027c2" gracePeriod=30 Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.738331 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-vrk6k"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.738554 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" podUID="75f2198a-7d70-4447-b8c2-62ac40b5c167" containerName="dnsmasq-dns" containerID="cri-o://d51442f2112dafeba9e3beeda4d0051ee6813f10a3f4230f01f59b8bc141e8ed" gracePeriod=10 Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.758006 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx24t\" (UniqueName: \"kubernetes.io/projected/4b82b9a4-6707-446c-abea-2d4a560a43d7-kube-api-access-gx24t\") pod \"neutron21b3-account-delete-p2f8j\" (UID: \"4b82b9a4-6707-446c-abea-2d4a560a43d7\") " pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.758119 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b82b9a4-6707-446c-abea-2d4a560a43d7-operator-scripts\") pod \"neutron21b3-account-delete-p2f8j\" (UID: \"4b82b9a4-6707-446c-abea-2d4a560a43d7\") " pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.758150 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d94jl\" (UniqueName: \"kubernetes.io/projected/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-kube-api-access-d94jl\") pod \"barbicanbfce-account-delete-mthln\" (UID: \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\") " pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.758958 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-operator-scripts\") pod \"barbicanbfce-account-delete-mthln\" (UID: \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\") " pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.760702 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b82b9a4-6707-446c-abea-2d4a560a43d7-operator-scripts\") pod \"neutron21b3-account-delete-p2f8j\" (UID: \"4b82b9a4-6707-446c-abea-2d4a560a43d7\") " pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.761333 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell05c20-account-delete-fvdvh" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.767005 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j9qhs"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.779253 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j9qhs"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.793001 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-f8vvb"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.809449 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-f8vvb"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.810025 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx24t\" (UniqueName: \"kubernetes.io/projected/4b82b9a4-6707-446c-abea-2d4a560a43d7-kube-api-access-gx24t\") pod \"neutron21b3-account-delete-p2f8j\" (UID: \"4b82b9a4-6707-446c-abea-2d4a560a43d7\") " pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.819631 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mj4c8"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.837256 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mj4c8"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.860024 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d94jl\" (UniqueName: \"kubernetes.io/projected/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-kube-api-access-d94jl\") pod \"barbicanbfce-account-delete-mthln\" (UID: \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\") " pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.860099 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-operator-scripts\") pod \"barbicanbfce-account-delete-mthln\" (UID: \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\") " pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.861831 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-operator-scripts\") pod \"barbicanbfce-account-delete-mthln\" (UID: \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\") " pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.883084 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.886311 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb is running failed: container process not found" containerID="b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.891347 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb is running failed: container process not found" containerID="b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.901067 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb is running failed: container process not found" containerID="b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 23 07:10:31 crc kubenswrapper[4988]: E1123 07:10:31.901134 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerName="ovsdbserver-sb" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.904804 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d94jl\" (UniqueName: \"kubernetes.io/projected/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-kube-api-access-d94jl\") pod \"barbicanbfce-account-delete-mthln\" (UID: \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\") " pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.932042 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bprtz"] Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.958355 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:31 crc kubenswrapper[4988]: I1123 07:10:31.963572 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bprtz"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.007284 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-mzqm8"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.037838 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-mzqm8"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.046895 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-c4zpx"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.057268 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-c4zpx"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.068337 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.068566 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerName="cinder-scheduler" containerID="cri-o://03e37d124318dbc7bdae86e68e8a56352fe7f075c2540608bb85d91aa3b8f04d" gracePeriod=30 Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.068989 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerName="probe" containerID="cri-o://df780d567dc75d2a747323c194d8edc1f1c8620703d8cf00e07d978a89c64cd2" gracePeriod=30 Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.088652 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder80d1-account-delete-sq4dt"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.171245 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-26h6n"] Nov 23 07:10:32 crc kubenswrapper[4988]: E1123 07:10:32.172542 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:10:32 crc kubenswrapper[4988]: E1123 07:10:32.172609 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data podName:0b12d6f8-ea7a-4a60-b459-11563683791d nodeName:}" failed. No retries permitted until 2025-11-23 07:10:33.172579824 +0000 UTC m=+1485.481092587 (durationBeforeRetry 1s). 
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.212099 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-26h6n"]
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.219343 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.219653 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d4be2080-1204-4f6e-ac00-bba757695872" containerName="cinder-api-log" containerID="cri-o://0528954f5da33c5e64f1f55abc59161faf590954a1985c469ea4c1f06355f574" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.220787 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d4be2080-1204-4f6e-ac00-bba757695872" containerName="cinder-api" containerID="cri-o://70c5ffb584b9bbe5c2b209c28d137387ef7c662311797b190ca13786ca38138a" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.250369 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.250912 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-server" containerID="cri-o://658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251373 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-server" containerID="cri-o://44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251794 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="swift-recon-cron" containerID="cri-o://f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251811 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="rsync" containerID="cri-o://e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251822 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-expirer" containerID="cri-o://cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251831 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-updater" containerID="cri-o://a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251842 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-auditor" containerID="cri-o://6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251859 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-replicator" containerID="cri-o://ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251872 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-server" containerID="cri-o://19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251883 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-updater" containerID="cri-o://7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251894 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-auditor" containerID="cri-o://46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251904 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-replicator" containerID="cri-o://b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251914 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-auditor" containerID="cri-o://bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.251948 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-reaper" containerID="cri-o://00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.255882 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-replicator" containerID="cri-o://5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26" gracePeriod=30
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.286299 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.287066 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerName="openstack-network-exporter" containerID="cri-o://68466f03cc012e80f4cfdb29fa67746fe3b1696571d0bc999ac0bb9f1d9506e5" gracePeriod=300
containerID="cri-o://68466f03cc012e80f4cfdb29fa67746fe3b1696571d0bc999ac0bb9f1d9506e5" gracePeriod=300 Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.318060 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-55cfdd5f8d-kn94x"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.318297 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-55cfdd5f8d-kn94x" podUID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerName="placement-log" containerID="cri-o://db7496dcd1faf49f05c99168f0af122bee0300bc60d1e286d3f55f6eb98a7498" gracePeriod=30 Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.318636 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-55cfdd5f8d-kn94x" podUID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerName="placement-api" containerID="cri-o://c3aa642238bf2e182f6aa8b168ebe96dd9671c5155884f7a97375c632ebe4f02" gracePeriod=30 Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.337289 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.337520 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-log" containerID="cri-o://a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a" gracePeriod=30 Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.337626 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-api" containerID="cri-o://fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0" gracePeriod=30 Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.366847 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.429646 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-m6l8j"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.449983 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-m6l8j"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.469640 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.470028 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-log" containerID="cri-o://dcd0354caf733195781e1f948fd33c5f325bbd542de067c894db308d568445d1" gracePeriod=30 Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.470740 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-metadata" containerID="cri-o://d2b1f46c3d98eca77fce5ba02072ca8a40d4ce3218cb93220d983e5effd1b750" gracePeriod=30 Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.493501 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-98a8-account-create-79sxz"] Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.532863 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0965fd34-7f35-496d-82c2-ad7a4cfb0d63" 
path="/var/lib/kubelet/pods/0965fd34-7f35-496d-82c2-ad7a4cfb0d63/volumes" Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.533938 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f2fc5f1-4980-49de-8ff7-981bb9f4966c" path="/var/lib/kubelet/pods/5f2fc5f1-4980-49de-8ff7-981bb9f4966c/volumes" Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.534962 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63551f0e-2705-400c-8c56-7c16c25c812d" path="/var/lib/kubelet/pods/63551f0e-2705-400c-8c56-7c16c25c812d/volumes" Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.535658 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8492667f-e261-4214-8c00-d2271167976e" path="/var/lib/kubelet/pods/8492667f-e261-4214-8c00-d2271167976e/volumes" Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.536556 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b8327c-fa6e-40b7-984e-c819d78da49b" path="/var/lib/kubelet/pods/95b8327c-fa6e-40b7-984e-c819d78da49b/volumes" Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.539128 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98aed9f5-ae61-4e6e-bd79-0dbc90fedf61" path="/var/lib/kubelet/pods/98aed9f5-ae61-4e6e-bd79-0dbc90fedf61/volumes" Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.540110 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa607400-86cc-43bd-ac9a-da02dc37dff7" path="/var/lib/kubelet/pods/aa607400-86cc-43bd-ac9a-da02dc37dff7/volumes" Nov 23 07:10:32 crc kubenswrapper[4988]: I1123 07:10:32.542156 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5e9b0b-3032-4132-9dbe-d12bd89466f0" path="/var/lib/kubelet/pods/bd5e9b0b-3032-4132-9dbe-d12bd89466f0/volumes" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.210598 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c11163ee-e1e7-47a7-a454-610a8b27542f/ovsdbserver-sb/0.log" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.239527 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f85c9f-2478-4293-85cb-17eccd6f262c" path="/var/lib/kubelet/pods/d9f85c9f-2478-4293-85cb-17eccd6f262c/volumes" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.240207 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-98a8-account-create-79sxz"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.240239 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c11163ee-e1e7-47a7-a454-610a8b27542f","Type":"ContainerDied","Data":"b085d4fb1ef64a900c73a1423110863ef3bde1a7f93da2bc209c52957335034c"} Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.240257 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b085d4fb1ef64a900c73a1423110863ef3bde1a7f93da2bc209c52957335034c" Nov 23 07:10:33 crc kubenswrapper[4988]: E1123 07:10:33.243925 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:10:33 crc kubenswrapper[4988]: E1123 07:10:33.243983 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data podName:0b12d6f8-ea7a-4a60-b459-11563683791d nodeName:}" failed. No retries permitted until 2025-11-23 07:10:35.24396918 +0000 UTC m=+1487.552481943 (durationBeforeRetry 2s). 
Nov 23 07:10:33 crc kubenswrapper[4988]: E1123 07:10:33.268840 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Nov 23 07:10:33 crc kubenswrapper[4988]: E1123 07:10:33.269427 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data podName:692be1c8-4d8f-4676-89df-19f82b43f043 nodeName:}" failed. No retries permitted until 2025-11-23 07:10:35.269223676 +0000 UTC m=+1487.577736439 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data") pod "rabbitmq-cell1-server-0" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043") : configmap "rabbitmq-cell1-config-data" not found
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.300788 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c11163ee-e1e7-47a7-a454-610a8b27542f/ovsdbserver-sb/0.log"
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.300889 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.310879 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.311946 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerName="glance-log" containerID="cri-o://3efddc33cd9ce7c30bcaaa3df7dc5f188157ba378b301c9729e7ab2b1c6e2333" gracePeriod=30
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.312052 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerName="glance-httpd" containerID="cri-o://482947669a01cf95a71f90baa22b06aeac92eb3bd55c440708e11d5e72e1f4ca" gracePeriod=30
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.323424 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rmdh8_2afd4c0a-59f2-4313-a198-4e0e8255f163/openstack-network-exporter/0.log"
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.323490 4988 generic.go:334] "Generic (PLEG): container finished" podID="2afd4c0a-59f2-4313-a198-4e0e8255f163" containerID="ffefcfe13aa5e78958f29cba26a0887e5de7476e1ca25781589f55aaf4d027c2" exitCode=2
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.335374 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rmdh8" event={"ID":"2afd4c0a-59f2-4313-a198-4e0e8255f163","Type":"ContainerDied","Data":"ffefcfe13aa5e78958f29cba26a0887e5de7476e1ca25781589f55aaf4d027c2"}
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.408281 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68dbd6466f-n6f5g"]
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.408556 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68dbd6466f-n6f5g" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerName="neutron-api" containerID="cri-o://9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6" gracePeriod=30
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.408697 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68dbd6466f-n6f5g" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerName="neutron-httpd" containerID="cri-o://79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a" gracePeriod=30
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.469577 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerName="ovsdbserver-nb" containerID="cri-o://ff17dfc095111f52510f65b355dd4871947190d74f7a84b768c2f07965a73d84" gracePeriod=299
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.475650 4988 generic.go:334] "Generic (PLEG): container finished" podID="75f2198a-7d70-4447-b8c2-62ac40b5c167" containerID="d51442f2112dafeba9e3beeda4d0051ee6813f10a3f4230f01f59b8bc141e8ed" exitCode=0
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.480043 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" event={"ID":"75f2198a-7d70-4447-b8c2-62ac40b5c167","Type":"ContainerDied","Data":"d51442f2112dafeba9e3beeda4d0051ee6813f10a3f4230f01f59b8bc141e8ed"}
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.480089 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.480326 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerName="glance-log" containerID="cri-o://bc100c14d5a403c7cb084cb19a58629ec50c4006b31564afe6806ad8247c5c3d" gracePeriod=30
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.481036 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerName="glance-httpd" containerID="cri-o://ffc32302cd38e863bb6d6aaea86be25fa12696798a187ea26576d320a9c5ccd3" gracePeriod=30
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.494809 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.494845 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdbserver-sb-tls-certs\") pod \"c11163ee-e1e7-47a7-a454-610a8b27542f\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") "
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.494908 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-config\") pod \"c11163ee-e1e7-47a7-a454-610a8b27542f\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") "
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.494958 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c11163ee-e1e7-47a7-a454-610a8b27542f\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") "
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.495029 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-combined-ca-bundle\") pod \"c11163ee-e1e7-47a7-a454-610a8b27542f\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") "
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.495112 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-scripts\") pod \"c11163ee-e1e7-47a7-a454-610a8b27542f\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") "
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.498347 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdrj4\" (UniqueName: \"kubernetes.io/projected/c11163ee-e1e7-47a7-a454-610a8b27542f-kube-api-access-kdrj4\") pod \"c11163ee-e1e7-47a7-a454-610a8b27542f\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") "
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.498470 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdb-rundir\") pod \"c11163ee-e1e7-47a7-a454-610a8b27542f\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") "
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.498547 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-metrics-certs-tls-certs\") pod \"c11163ee-e1e7-47a7-a454-610a8b27542f\" (UID: \"c11163ee-e1e7-47a7-a454-610a8b27542f\") "
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.502665 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-config" (OuterVolumeSpecName: "config") pod "c11163ee-e1e7-47a7-a454-610a8b27542f" (UID: "c11163ee-e1e7-47a7-a454-610a8b27542f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.502748 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c11163ee-e1e7-47a7-a454-610a8b27542f" (UID: "c11163ee-e1e7-47a7-a454-610a8b27542f"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.505713 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-scripts" (OuterVolumeSpecName: "scripts") pod "c11163ee-e1e7-47a7-a454-610a8b27542f" (UID: "c11163ee-e1e7-47a7-a454-610a8b27542f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
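Each volume of the deleted ovsdbserver-sb-0 pod (UID c11163ee-e1e7-47a7-a454-610a8b27542f) goes through the same reconciler sequence above: "operationExecutor.UnmountVolume started", then "UnmountVolume.TearDown succeeded" per plugin, and finally "Volume detached". On the node this corresponds to per-plugin subdirectories emptying out under the pod's volumes directory; a hypothetical node-side check (the path layout is standard kubelet, the UID is taken from the entries above):

    # Watch the pod's volume mounts disappear as TearDown completes for each plugin:
    ls -R /var/lib/kubelet/pods/c11163ee-e1e7-47a7-a454-610a8b27542f/volumes/
    # expected subdirs while mounted, e.g.:
    #   kubernetes.io~configmap/config, kubernetes.io~configmap/scripts,
    #   kubernetes.io~secret/ovsdbserver-sb-tls-certs, kubernetes.io~projected/kube-api-access-kdrj4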
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.514347 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.514584 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://09b29081da7241818cdcc74db9b8d720eb74975763b6c6d467e42525454be55b" gracePeriod=30 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.534284 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.535993 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11163ee-e1e7-47a7-a454-610a8b27542f-kube-api-access-kdrj4" (OuterVolumeSpecName: "kube-api-access-kdrj4") pod "c11163ee-e1e7-47a7-a454-610a8b27542f" (UID: "c11163ee-e1e7-47a7-a454-610a8b27542f"). InnerVolumeSpecName "kube-api-access-kdrj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.551578 4988 generic.go:334] "Generic (PLEG): container finished" podID="2e680706-1677-4f92-9957-9dd477bbc7be" containerID="5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749" exitCode=2 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.553648 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-f7fdc8956-g6vw5"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.553699 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2e680706-1677-4f92-9957-9dd477bbc7be","Type":"ContainerDied","Data":"5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749"} Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.553925 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-f7fdc8956-g6vw5" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api-log" containerID="cri-o://8afa739f5110059b465e6885ff5859dc6184950082e63a26a379905cd8929a41" gracePeriod=30 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.554427 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-f7fdc8956-g6vw5" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api" containerID="cri-o://9177522bc27ecd27e87dbfaffac6a6f6968557f56ceb08013aef630c895b62fb" gracePeriod=30 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.554882 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder80d1-account-delete-sq4dt" event={"ID":"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d","Type":"ContainerStarted","Data":"bc91da899038608454e4eb0590a93b35d538e2132fca26e6aa02ba99e96836a9"} Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.566213 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7888d7fbb9-cqj2f"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.566435 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" podUID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" containerName="barbican-worker-log" containerID="cri-o://c20e80b908053c9ad38b943cfa24ecc2c59c1063094c7728511419afd22791ce" gracePeriod=30 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 
07:10:33.566541 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" podUID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" containerName="barbican-worker" containerID="cri-o://8d370a258077eef29df07553ebb57bc3f0df94518539e125a0c3eaef83ef1b5b" gracePeriod=30 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.574867 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "c11163ee-e1e7-47a7-a454-610a8b27542f" (UID: "c11163ee-e1e7-47a7-a454-610a8b27542f"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.607051 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.607086 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.607097 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdrj4\" (UniqueName: \"kubernetes.io/projected/c11163ee-e1e7-47a7-a454-610a8b27542f-kube-api-access-kdrj4\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.607107 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.607115 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c11163ee-e1e7-47a7-a454-610a8b27542f-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.660344 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7bf555f794-8vm7k"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.660609 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" podUID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerName="barbican-keystone-listener-log" containerID="cri-o://398b87a138cec8f732064d3f7cb513a557fa9c4ba1deb887d5a4585196f85d30" gracePeriod=30 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.661068 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" podUID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerName="barbican-keystone-listener" containerID="cri-o://d1dbf13c4c51f91d80504e9813025e575415f2d87f015a192bbdff65a11f6ae1" gracePeriod=30 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.700018 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.711134 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement2c17-account-delete-ps9t4"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.712579 4988 reconciler_common.go:293] "Volume detached for volume 
\"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.728274 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c11163ee-e1e7-47a7-a454-610a8b27542f" (UID: "c11163ee-e1e7-47a7-a454-610a8b27542f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.736529 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.736776 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="43d09b31-ee49-498b-bbaf-368e53723f62" containerName="nova-cell1-conductor-conductor" containerID="cri-o://b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75" gracePeriod=30 Nov 23 07:10:33 crc kubenswrapper[4988]: W1123 07:10:33.741255 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52da7e45_da4b_4b22_b4d9_de675091c282.slice/crio-975710bb3902fce1fe0e8ed4bbc591e1ef4ddc8c6c00163527cf4d554787bbdd WatchSource:0}: Error finding container 975710bb3902fce1fe0e8ed4bbc591e1ef4ddc8c6c00163527cf4d554787bbdd: Status 404 returned error can't find the container with id 975710bb3902fce1fe0e8ed4bbc591e1ef4ddc8c6c00163527cf4d554787bbdd Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.751277 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zvkhn"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.756441 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="692be1c8-4d8f-4676-89df-19f82b43f043" containerName="rabbitmq" containerID="cri-o://70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e" gracePeriod=604800 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.772992 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zvkhn"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.775923 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerName="rabbitmq" containerID="cri-o://901399c306cb37106a8d64f41934670193530a63fe14371cd50172d426d923d6" gracePeriod=604800 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.792399 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.792623 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" containerName="nova-cell0-conductor-conductor" containerID="cri-o://7d023e152f127851ebcc0816fd9a68deedbc0ac220f32efd3e76af9c66c576c7" gracePeriod=30 Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.813910 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.835160 
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.842239 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9z7ns"]
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.863259 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9z7ns"]
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.866259 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd" probeResult="failure" output=<
Nov 23 07:10:33 crc kubenswrapper[4988]: cat: /var/run/openvswitch/ovs-vswitchd.pid: No such file or directory
Nov 23 07:10:33 crc kubenswrapper[4988]: ERROR - Failed to get pid for ovs-vswitchd, exit status: 0
Nov 23 07:10:33 crc kubenswrapper[4988]: >
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.874401 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c11163ee-e1e7-47a7-a454-610a8b27542f" (UID: "c11163ee-e1e7-47a7-a454-610a8b27542f"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.884905 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd" containerID="cri-o://3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" gracePeriod=28
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.887339 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.887556 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="89362c9c-bf2d-4e66-8ac3-7b288262b3d8" containerName="nova-scheduler-scheduler" containerID="cri-o://039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b" gracePeriod=30
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.896317 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="09c548ca-78f0-4e91-8a5d-dce756b0421e" containerName="galera" containerID="cri-o://2449f703f7311cf646d0edc484376bd64bc707c2506b087647d3f70f964b9a7b" gracePeriod=29
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.908919 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapicb35-account-delete-wsxgk"]
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.917259 4988 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:33 crc kubenswrapper[4988]: I1123 07:10:33.917289 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11163ee-e1e7-47a7-a454-610a8b27542f-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:34 crc kubenswrapper[4988]: E1123 07:10:34.058886 4988 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Nov 23 07:10:34 crc kubenswrapper[4988]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Nov 23 07:10:34 crc kubenswrapper[4988]: + source /usr/local/bin/container-scripts/functions
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNBridge=br-int
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNRemote=tcp:localhost:6642
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNEncapType=geneve
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNAvailabilityZones=
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ EnableChassisAsGateway=true
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ PhysicalNetworks=
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNHostName=
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ DB_FILE=/etc/openvswitch/conf.db
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ ovs_dir=/var/lib/openvswitch
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + cleanup_ovsdb_server_semaphore
Nov 23 07:10:34 crc kubenswrapper[4988]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Nov 23 07:10:34 crc kubenswrapper[4988]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Nov 23 07:10:34 crc kubenswrapper[4988]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-7xsjx" message=<
Nov 23 07:10:34 crc kubenswrapper[4988]: Exiting ovsdb-server (5) [ OK ]
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Nov 23 07:10:34 crc kubenswrapper[4988]: + source /usr/local/bin/container-scripts/functions
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNBridge=br-int
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNRemote=tcp:localhost:6642
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNEncapType=geneve
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNAvailabilityZones=
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ EnableChassisAsGateway=true
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ PhysicalNetworks=
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNHostName=
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ DB_FILE=/etc/openvswitch/conf.db
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ ovs_dir=/var/lib/openvswitch
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + cleanup_ovsdb_server_semaphore
Nov 23 07:10:34 crc kubenswrapper[4988]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Nov 23 07:10:34 crc kubenswrapper[4988]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Nov 23 07:10:34 crc kubenswrapper[4988]: >
Nov 23 07:10:34 crc kubenswrapper[4988]: E1123 07:10:34.058928 4988 kuberuntime_container.go:691] "PreStop hook failed" err=<
Nov 23 07:10:34 crc kubenswrapper[4988]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Nov 23 07:10:34 crc kubenswrapper[4988]: + source /usr/local/bin/container-scripts/functions
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNBridge=br-int
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNRemote=tcp:localhost:6642
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNEncapType=geneve
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNAvailabilityZones=
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ EnableChassisAsGateway=true
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ PhysicalNetworks=
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ OVNHostName=
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ DB_FILE=/etc/openvswitch/conf.db
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ ovs_dir=/var/lib/openvswitch
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Nov 23 07:10:34 crc kubenswrapper[4988]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + sleep 0.5
Nov 23 07:10:34 crc kubenswrapper[4988]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:10:34 crc kubenswrapper[4988]: + cleanup_ovsdb_server_semaphore
Nov 23 07:10:34 crc kubenswrapper[4988]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Nov 23 07:10:34 crc kubenswrapper[4988]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Nov 23 07:10:34 crc kubenswrapper[4988]: > pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server" containerID="cri-o://9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47"
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.058966 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server" containerID="cri-o://9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" gracePeriod=28
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.124739 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rmdh8_2afd4c0a-59f2-4313-a198-4e0e8255f163/openstack-network-exporter/0.log"
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.125465 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rmdh8"
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.156770 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron21b3-account-delete-p2f8j"]
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.181936 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance23d0-account-delete-g2lxz"]
Nov 23 07:10:34 crc kubenswrapper[4988]: E1123 07:10:34.228713 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.233861 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-metrics-certs-tls-certs\") pod \"2afd4c0a-59f2-4313-a198-4e0e8255f163\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.233895 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovs-rundir\") pod \"2afd4c0a-59f2-4313-a198-4e0e8255f163\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.233933 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovn-rundir\") pod \"2afd4c0a-59f2-4313-a198-4e0e8255f163\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.234118 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-combined-ca-bundle\") pod \"2afd4c0a-59f2-4313-a198-4e0e8255f163\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") "
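The err=< and message=< blocks above print the same bash xtrace three times (once by the exec handler, once as the hook's captured output, once by the PreStop error path). Exit status 137 is 128+9: the hook process was SIGKILLed while still running, even though ovs-ctl had already reported "Exiting ovsdb-server (5) [ OK ]". From the trace alone, the PreStop script appears to do roughly the following; this is a reconstruction, not the actual file, and the while-loop form plus the contents of the sourced functions file are assumptions:

    #!/bin/bash
    # Reconstructed sketch of /usr/local/bin/container-scripts/stop-ovsdb-server.sh,
    # inferred from the xtrace above; the real script may differ.
    set -x
    source "$(dirname "$0")/functions"  # assumed to define SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE,
                                        # cleanup_ovsdb_server_semaphore, and the OVN/OVS variables

    # Poll every 0.5s (as seen in the trace) until another container, presumably
    # the ovn-controller shutdown path, touches the semaphore file to signal that
    # it is safe to stop ovsdb-server.
    while [ ! -f "$SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE" ]; do
        sleep 0.5
    done

    cleanup_ovsdb_server_semaphore  # per the trace: rm -f "$SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE"
    /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd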
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.234152 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnwmj\" (UniqueName: \"kubernetes.io/projected/2afd4c0a-59f2-4313-a198-4e0e8255f163-kube-api-access-xnwmj\") pod \"2afd4c0a-59f2-4313-a198-4e0e8255f163\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.234250 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afd4c0a-59f2-4313-a198-4e0e8255f163-config\") pod \"2afd4c0a-59f2-4313-a198-4e0e8255f163\" (UID: \"2afd4c0a-59f2-4313-a198-4e0e8255f163\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: E1123 07:10:34.235391 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.235652 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k"
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.236112 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "2afd4c0a-59f2-4313-a198-4e0e8255f163" (UID: "2afd4c0a-59f2-4313-a198-4e0e8255f163"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.236844 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "2afd4c0a-59f2-4313-a198-4e0e8255f163" (UID: "2afd4c0a-59f2-4313-a198-4e0e8255f163"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.236911 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbicanbfce-account-delete-mthln"]
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.238672 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afd4c0a-59f2-4313-a198-4e0e8255f163-config" (OuterVolumeSpecName: "config") pod "2afd4c0a-59f2-4313-a198-4e0e8255f163" (UID: "2afd4c0a-59f2-4313-a198-4e0e8255f163"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.245409 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell05c20-account-delete-fvdvh"]
Nov 23 07:10:34 crc kubenswrapper[4988]: E1123 07:10:34.255142 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 07:10:34 crc kubenswrapper[4988]: E1123 07:10:34.270020 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="89362c9c-bf2d-4e66-8ac3-7b288262b3d8" containerName="nova-scheduler-scheduler"
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.276325 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2afd4c0a-59f2-4313-a198-4e0e8255f163-kube-api-access-xnwmj" (OuterVolumeSpecName: "kube-api-access-xnwmj") pod "2afd4c0a-59f2-4313-a198-4e0e8255f163" (UID: "2afd4c0a-59f2-4313-a198-4e0e8255f163"). InnerVolumeSpecName "kube-api-access-xnwmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:10:34 crc kubenswrapper[4988]: W1123 07:10:34.310464 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece4b7bd_2b01_4ad3_8782_a4d7341f0b60.slice/crio-c3a0f76d21eb88ec11067621dd036c091531ffd932e88f3154c2243c0dbedbe0 WatchSource:0}: Error finding container c3a0f76d21eb88ec11067621dd036c091531ffd932e88f3154c2243c0dbedbe0: Status 404 returned error can't find the container with id c3a0f76d21eb88ec11067621dd036c091531ffd932e88f3154c2243c0dbedbe0
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.327442 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6bfdb6f865-pn8fq"]
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.328271 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-httpd" containerID="cri-o://34e7480000062695ebf573eedfae499c0ceef26baec215a617dadbdff949c05f" gracePeriod=30
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.328758 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-server" containerID="cri-o://6c41f5805b7efdab5e67edf42677c7e686ae567b5ac0613407643871e1d427b1" gracePeriod=30
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337031 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-svc\") pod \"75f2198a-7d70-4447-b8c2-62ac40b5c167\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337078 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-config\") pod \"75f2198a-7d70-4447-b8c2-62ac40b5c167\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337096 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24nnl\" (UniqueName: \"kubernetes.io/projected/75f2198a-7d70-4447-b8c2-62ac40b5c167-kube-api-access-24nnl\") pod \"75f2198a-7d70-4447-b8c2-62ac40b5c167\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337140 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-nb\") pod \"75f2198a-7d70-4447-b8c2-62ac40b5c167\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337278 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-swift-storage-0\") pod \"75f2198a-7d70-4447-b8c2-62ac40b5c167\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337302 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-sb\") pod \"75f2198a-7d70-4447-b8c2-62ac40b5c167\" (UID: \"75f2198a-7d70-4447-b8c2-62ac40b5c167\") "
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337689 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnwmj\" (UniqueName: \"kubernetes.io/projected/2afd4c0a-59f2-4313-a198-4e0e8255f163-kube-api-access-xnwmj\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337700 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afd4c0a-59f2-4313-a198-4e0e8255f163-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337708 4988 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovs-rundir\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.337716 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2afd4c0a-59f2-4313-a198-4e0e8255f163-ovn-rundir\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.367732 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f2198a-7d70-4447-b8c2-62ac40b5c167-kube-api-access-24nnl" (OuterVolumeSpecName: "kube-api-access-24nnl") pod "75f2198a-7d70-4447-b8c2-62ac40b5c167" (UID: "75f2198a-7d70-4447-b8c2-62ac40b5c167"). InnerVolumeSpecName "kube-api-access-24nnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.439087 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24nnl\" (UniqueName: \"kubernetes.io/projected/75f2198a-7d70-4447-b8c2-62ac40b5c167-kube-api-access-24nnl\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.540151 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8539c348-f366-4d11-862b-a645eaaf4a40" path="/var/lib/kubelet/pods/8539c348-f366-4d11-862b-a645eaaf4a40/volumes"
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.540687 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ba06f4-14ba-421e-85ab-f9a593f7c60c" path="/var/lib/kubelet/pods/a1ba06f4-14ba-421e-85ab-f9a593f7c60c/volumes"
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.541233 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d192035c-1590-4efb-af88-66e16c8afab7" path="/var/lib/kubelet/pods/d192035c-1590-4efb-af88-66e16c8afab7/volumes"
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.546727 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75f2198a-7d70-4447-b8c2-62ac40b5c167" (UID: "75f2198a-7d70-4447-b8c2-62ac40b5c167"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.559402 4988 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.613837 4988 generic.go:334] "Generic (PLEG): container finished" podID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerID="dcd0354caf733195781e1f948fd33c5f325bbd542de067c894db308d568445d1" exitCode=143
Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.617490 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2afd4c0a-59f2-4313-a198-4e0e8255f163" (UID: "2afd4c0a-59f2-4313-a198-4e0e8255f163"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657743 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657807 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657819 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657829 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657836 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657842 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657848 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657854 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657860 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657868 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657874 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657880 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657887 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.657893 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" 
containerID="658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.660970 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.666623 4988 generic.go:334] "Generic (PLEG): container finished" podID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerID="3efddc33cd9ce7c30bcaaa3df7dc5f188157ba378b301c9729e7ab2b1c6e2333" exitCode=143 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.689533 4988 generic.go:334] "Generic (PLEG): container finished" podID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerID="398b87a138cec8f732064d3f7cb513a557fa9c4ba1deb887d5a4585196f85d30" exitCode=143 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.691520 4988 generic.go:334] "Generic (PLEG): container finished" podID="61784d29-67cb-4150-923e-0e819bdde923" containerID="8afa739f5110059b465e6885ff5859dc6184950082e63a26a379905cd8929a41" exitCode=143 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.691600 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75f2198a-7d70-4447-b8c2-62ac40b5c167" (UID: "75f2198a-7d70-4447-b8c2-62ac40b5c167"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.724797 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75f2198a-7d70-4447-b8c2-62ac40b5c167" (UID: "75f2198a-7d70-4447-b8c2-62ac40b5c167"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.726079 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75f2198a-7d70-4447-b8c2-62ac40b5c167" (UID: "75f2198a-7d70-4447-b8c2-62ac40b5c167"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.735082 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_832df8ad-6b73-46a8-979f-ec3887c49e83/ovsdbserver-nb/0.log" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.735978 4988 generic.go:334] "Generic (PLEG): container finished" podID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerID="68466f03cc012e80f4cfdb29fa67746fe3b1696571d0bc999ac0bb9f1d9506e5" exitCode=2 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.736290 4988 generic.go:334] "Generic (PLEG): container finished" podID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerID="ff17dfc095111f52510f65b355dd4871947190d74f7a84b768c2f07965a73d84" exitCode=143 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.755105 4988 generic.go:334] "Generic (PLEG): container finished" podID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerID="bc100c14d5a403c7cb084cb19a58629ec50c4006b31564afe6806ad8247c5c3d" exitCode=143 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.756180 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-config" (OuterVolumeSpecName: "config") pod "75f2198a-7d70-4447-b8c2-62ac40b5c167" (UID: "75f2198a-7d70-4447-b8c2-62ac40b5c167"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.760429 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-rmdh8_2afd4c0a-59f2-4313-a198-4e0e8255f163/openstack-network-exporter/0.log" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.760540 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-rmdh8" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.764995 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.765019 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.765028 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.765038 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f2198a-7d70-4447-b8c2-62ac40b5c167-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.766132 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2afd4c0a-59f2-4313-a198-4e0e8255f163" (UID: "2afd4c0a-59f2-4313-a198-4e0e8255f163"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.822430 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.834128 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.195:6080/vnc_lite.html\": dial tcp 10.217.0.195:6080: connect: connection refused" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.866748 4988 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afd4c0a-59f2-4313-a198-4e0e8255f163-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.880662 4988 generic.go:334] "Generic (PLEG): container finished" podID="d4be2080-1204-4f6e-ac00-bba757695872" containerID="0528954f5da33c5e64f1f55abc59161faf590954a1985c469ea4c1f06355f574" exitCode=143 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.892983 4988 generic.go:334] "Generic (PLEG): container finished" podID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerID="79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.895204 4988 generic.go:334] "Generic (PLEG): container finished" podID="612dbf27-0967-4833-a62f-c86a008fe257" containerID="a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a" exitCode=143 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.899672 4988 generic.go:334] "Generic (PLEG): container finished" podID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerID="db7496dcd1faf49f05c99168f0af122bee0300bc60d1e286d3f55f6eb98a7498" exitCode=143 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.901343 4988 generic.go:334] "Generic (PLEG): container finished" podID="351d084c-73d8-4965-97c8-407826793cd6" containerID="34e7480000062695ebf573eedfae499c0ceef26baec215a617dadbdff949c05f" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.905665 4988 generic.go:334] "Generic (PLEG): container finished" podID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerID="df780d567dc75d2a747323c194d8edc1f1c8620703d8cf00e07d978a89c64cd2" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.905682 4988 generic.go:334] "Generic (PLEG): container finished" podID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerID="03e37d124318dbc7bdae86e68e8a56352fe7f075c2540608bb85d91aa3b8f04d" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.907171 4988 generic.go:334] "Generic (PLEG): container finished" podID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" containerID="c20e80b908053c9ad38b943cfa24ecc2c59c1063094c7728511419afd22791ce" exitCode=143 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.916543 4988 generic.go:334] "Generic (PLEG): container finished" podID="2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" containerID="a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9" exitCode=137 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.920784 4988 generic.go:334] "Generic (PLEG): container finished" podID="618fb238-2a5a-4265-9545-9ccbf016f855" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" exitCode=0 Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.922360 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.968205 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974076 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a28a2bd-cf03-47d7-b142-63b066fdeb42","Type":"ContainerDied","Data":"dcd0354caf733195781e1f948fd33c5f325bbd542de067c894db308d568445d1"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974222 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974237 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974248 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974264 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974273 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974282 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974292 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974300 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974308 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974317 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732"} Nov 23 07:10:34 crc 
kubenswrapper[4988]: I1123 07:10:34.974326 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974336 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974345 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974353 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974678 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4","Type":"ContainerDied","Data":"3efddc33cd9ce7c30bcaaa3df7dc5f188157ba378b301c9729e7ab2b1c6e2333"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974693 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" event={"ID":"9afe27be-257c-4ea4-84c1-e41a289ad06a","Type":"ContainerDied","Data":"398b87a138cec8f732064d3f7cb513a557fa9c4ba1deb887d5a4585196f85d30"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974704 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f7fdc8956-g6vw5" event={"ID":"61784d29-67cb-4150-923e-0e819bdde923","Type":"ContainerDied","Data":"8afa739f5110059b465e6885ff5859dc6184950082e63a26a379905cd8929a41"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974715 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron21b3-account-delete-p2f8j" event={"ID":"4b82b9a4-6707-446c-abea-2d4a560a43d7","Type":"ContainerStarted","Data":"fadf49234bf96e8e8d34727141627f1b2ae06f923577a71a5a0b18d5245dd48e"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974726 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"832df8ad-6b73-46a8-979f-ec3887c49e83","Type":"ContainerDied","Data":"68466f03cc012e80f4cfdb29fa67746fe3b1696571d0bc999ac0bb9f1d9506e5"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974737 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"832df8ad-6b73-46a8-979f-ec3887c49e83","Type":"ContainerDied","Data":"ff17dfc095111f52510f65b355dd4871947190d74f7a84b768c2f07965a73d84"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974747 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"832df8ad-6b73-46a8-979f-ec3887c49e83","Type":"ContainerDied","Data":"d26b8143bec5fc36b678b5c84272c5c9a22b7d2e5f2a4cfc0d8b7ab07af17913"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974756 4988 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d26b8143bec5fc36b678b5c84272c5c9a22b7d2e5f2a4cfc0d8b7ab07af17913" Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974765 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"720d09f3-1104-47a0-93e9-ffb48cf1ae69","Type":"ContainerDied","Data":"bc100c14d5a403c7cb084cb19a58629ec50c4006b31564afe6806ad8247c5c3d"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974775 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-rmdh8" event={"ID":"2afd4c0a-59f2-4313-a198-4e0e8255f163","Type":"ContainerDied","Data":"02b550aa387b141edfbfa1c016e664f26bf90d025854ef06c103fa21d674e563"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974788 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement2c17-account-delete-ps9t4" event={"ID":"52da7e45-da4b-4b22-b4d9-de675091c282","Type":"ContainerStarted","Data":"975710bb3902fce1fe0e8ed4bbc591e1ef4ddc8c6c00163527cf4d554787bbdd"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974799 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder80d1-account-delete-sq4dt" event={"ID":"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d","Type":"ContainerStarted","Data":"ddf5e557a856c1e8720639a34e6349eb693284d794748d6356d59794c5f7cb6d"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974809 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-vrk6k" event={"ID":"75f2198a-7d70-4447-b8c2-62ac40b5c167","Type":"ContainerDied","Data":"5df1c00ec72f082971ce7793ec3e5aa24db3607de6146fde70e4c08cf4fbf0b6"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974819 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4be2080-1204-4f6e-ac00-bba757695872","Type":"ContainerDied","Data":"0528954f5da33c5e64f1f55abc59161faf590954a1985c469ea4c1f06355f574"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974831 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68dbd6466f-n6f5g" event={"ID":"873f95e0-7013-479e-b8b1-d3cf948d24fe","Type":"ContainerDied","Data":"79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974840 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"612dbf27-0967-4833-a62f-c86a008fe257","Type":"ContainerDied","Data":"a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974850 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55cfdd5f8d-kn94x" event={"ID":"4b6f8f28-b3df-4d34-a898-74f4dc12f201","Type":"ContainerDied","Data":"db7496dcd1faf49f05c99168f0af122bee0300bc60d1e286d3f55f6eb98a7498"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974860 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" event={"ID":"351d084c-73d8-4965-97c8-407826793cd6","Type":"ContainerDied","Data":"34e7480000062695ebf573eedfae499c0ceef26baec215a617dadbdff949c05f"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974872 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicanbfce-account-delete-mthln" event={"ID":"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410","Type":"ContainerStarted","Data":"857347ca0d00e270378e1d776f5a8e372e8308b59c30a33303161be126a3ed59"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 
07:10:34.974882 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapicb35-account-delete-wsxgk" event={"ID":"610d9cd6-32c2-4a24-a462-df3c8da3f90f","Type":"ContainerStarted","Data":"d84d30bc4ed1d46d7e95bda76d91fc21f8a621040bcb425bfb55f1cf331cdc3b"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974893 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a2f5e1e6-0051-487f-b9ca-76003e7deed1","Type":"ContainerDied","Data":"df780d567dc75d2a747323c194d8edc1f1c8620703d8cf00e07d978a89c64cd2"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974904 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a2f5e1e6-0051-487f-b9ca-76003e7deed1","Type":"ContainerDied","Data":"03e37d124318dbc7bdae86e68e8a56352fe7f075c2540608bb85d91aa3b8f04d"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974913 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" event={"ID":"fbae5c0b-cb91-459a-acb7-e494aedd6d99","Type":"ContainerDied","Data":"c20e80b908053c9ad38b943cfa24ecc2c59c1063094c7728511419afd22791ce"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974926 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05c20-account-delete-fvdvh" event={"ID":"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60","Type":"ContainerStarted","Data":"c3a0f76d21eb88ec11067621dd036c091531ffd932e88f3154c2243c0dbedbe0"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974937 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xsjx" event={"ID":"618fb238-2a5a-4265-9545-9ccbf016f855","Type":"ContainerDied","Data":"9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974946 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance23d0-account-delete-g2lxz" event={"ID":"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7","Type":"ContainerStarted","Data":"e2b98ead01b43d859c2b773b3b3a8acaa4e7a6e431a6a456a9266454056df44b"} Nov 23 07:10:34 crc kubenswrapper[4988]: I1123 07:10:34.974964 4988 scope.go:117] "RemoveContainer" containerID="ffefcfe13aa5e78958f29cba26a0887e5de7476e1ca25781589f55aaf4d027c2" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.070326 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-kube-api-access-xpspp\") pod \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.070705 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-combined-ca-bundle\") pod \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.070836 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config-secret\") pod \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.070878 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config\") pod \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\" (UID: \"2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.074656 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-kube-api-access-xpspp" (OuterVolumeSpecName: "kube-api-access-xpspp") pod "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" (UID: "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4"). InnerVolumeSpecName "kube-api-access-xpspp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.096021 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_832df8ad-6b73-46a8-979f-ec3887c49e83/ovsdbserver-nb/0.log" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.096111 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.106224 4988 scope.go:117] "RemoveContainer" containerID="d51442f2112dafeba9e3beeda4d0051ee6813f10a3f4230f01f59b8bc141e8ed" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.111233 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" (UID: "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.133121 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.135049 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.139902 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.139971 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="ovn-northd" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.171351 4988 scope.go:117] "RemoveContainer" containerID="3d8943009ea79054fa20ce82f04a3cdc3c352ee9f38c84b121658a7640dd9879" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.172719 4988 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"832df8ad-6b73-46a8-979f-ec3887c49e83\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.172801 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-scripts\") pod \"832df8ad-6b73-46a8-979f-ec3887c49e83\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.172930 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdbserver-nb-tls-certs\") pod \"832df8ad-6b73-46a8-979f-ec3887c49e83\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.173050 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tthpx\" (UniqueName: \"kubernetes.io/projected/832df8ad-6b73-46a8-979f-ec3887c49e83-kube-api-access-tthpx\") pod \"832df8ad-6b73-46a8-979f-ec3887c49e83\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.173071 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-combined-ca-bundle\") pod \"832df8ad-6b73-46a8-979f-ec3887c49e83\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.173133 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-metrics-certs-tls-certs\") pod \"832df8ad-6b73-46a8-979f-ec3887c49e83\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.173184 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdb-rundir\") pod \"832df8ad-6b73-46a8-979f-ec3887c49e83\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.173231 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-config\") pod \"832df8ad-6b73-46a8-979f-ec3887c49e83\" (UID: \"832df8ad-6b73-46a8-979f-ec3887c49e83\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.173757 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.173775 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpspp\" (UniqueName: \"kubernetes.io/projected/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-kube-api-access-xpspp\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.176653 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-config" (OuterVolumeSpecName: "config") pod 
"832df8ad-6b73-46a8-979f-ec3887c49e83" (UID: "832df8ad-6b73-46a8-979f-ec3887c49e83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.176720 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.199348 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.210747 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-scripts" (OuterVolumeSpecName: "scripts") pod "832df8ad-6b73-46a8-979f-ec3887c49e83" (UID: "832df8ad-6b73-46a8-979f-ec3887c49e83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.210971 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.212411 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "832df8ad-6b73-46a8-979f-ec3887c49e83" (UID: "832df8ad-6b73-46a8-979f-ec3887c49e83"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.231302 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.165:8080/healthcheck\": dial tcp 10.217.0.165:8080: connect: connection refused" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.232546 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.165:8080/healthcheck\": dial tcp 10.217.0.165:8080: connect: connection refused" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.236544 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832df8ad-6b73-46a8-979f-ec3887c49e83-kube-api-access-tthpx" (OuterVolumeSpecName: "kube-api-access-tthpx") pod "832df8ad-6b73-46a8-979f-ec3887c49e83" (UID: "832df8ad-6b73-46a8-979f-ec3887c49e83"). InnerVolumeSpecName "kube-api-access-tthpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.242732 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "832df8ad-6b73-46a8-979f-ec3887c49e83" (UID: "832df8ad-6b73-46a8-979f-ec3887c49e83"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.253409 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-vrk6k"] Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.266142 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-vrk6k"] Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.274289 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2f5e1e6-0051-487f-b9ca-76003e7deed1-etc-machine-id\") pod \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.274445 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9kj8\" (UniqueName: \"kubernetes.io/projected/a2f5e1e6-0051-487f-b9ca-76003e7deed1-kube-api-access-h9kj8\") pod \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.274529 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data\") pod \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.274558 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data-custom\") pod \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.274579 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-combined-ca-bundle\") pod \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.274628 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-scripts\") pod \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\" (UID: \"a2f5e1e6-0051-487f-b9ca-76003e7deed1\") " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.274993 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tthpx\" (UniqueName: \"kubernetes.io/projected/832df8ad-6b73-46a8-979f-ec3887c49e83-kube-api-access-tthpx\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.275004 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.275012 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.275029 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node 
\"crc\" " Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.275038 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/832df8ad-6b73-46a8-979f-ec3887c49e83-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.277622 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.277696 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data podName:692be1c8-4d8f-4676-89df-19f82b43f043 nodeName:}" failed. No retries permitted until 2025-11-23 07:10:39.277677335 +0000 UTC m=+1491.586190098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data") pod "rabbitmq-cell1-server-0" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043") : configmap "rabbitmq-cell1-config-data" not found Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.277836 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2f5e1e6-0051-487f-b9ca-76003e7deed1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a2f5e1e6-0051-487f-b9ca-76003e7deed1" (UID: "a2f5e1e6-0051-487f-b9ca-76003e7deed1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.278018 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.278048 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data podName:0b12d6f8-ea7a-4a60-b459-11563683791d nodeName:}" failed. No retries permitted until 2025-11-23 07:10:39.278041014 +0000 UTC m=+1491.586553777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data") pod "rabbitmq-server-0" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d") : configmap "rabbitmq-config-data" not found Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.293332 4988 scope.go:117] "RemoveContainer" containerID="a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.297272 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-rmdh8"] Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.299329 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2f5e1e6-0051-487f-b9ca-76003e7deed1" (UID: "a2f5e1e6-0051-487f-b9ca-76003e7deed1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.308852 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f5e1e6-0051-487f-b9ca-76003e7deed1-kube-api-access-h9kj8" (OuterVolumeSpecName: "kube-api-access-h9kj8") pod "a2f5e1e6-0051-487f-b9ca-76003e7deed1" (UID: "a2f5e1e6-0051-487f-b9ca-76003e7deed1"). 
InnerVolumeSpecName "kube-api-access-h9kj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.322561 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-rmdh8"] Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.324169 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-scripts" (OuterVolumeSpecName: "scripts") pod "a2f5e1e6-0051-487f-b9ca-76003e7deed1" (UID: "a2f5e1e6-0051-487f-b9ca-76003e7deed1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.335365 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" (UID: "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.376150 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.376176 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.376186 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.376245 4988 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2f5e1e6-0051-487f-b9ca-76003e7deed1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.376253 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9kj8\" (UniqueName: \"kubernetes.io/projected/a2f5e1e6-0051-487f-b9ca-76003e7deed1-kube-api-access-h9kj8\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.643358 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75 is running failed: container process not found" containerID="b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.644077 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75 is running failed: container process not found" containerID="b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.646626 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc 
= container is not created or running: checking if PID of b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75 is running failed: container process not found" containerID="b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:10:35 crc kubenswrapper[4988]: E1123 07:10:35.646662 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="43d09b31-ee49-498b-bbaf-368e53723f62" containerName="nova-cell1-conductor-conductor" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.668635 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.683034 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.694082 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2f5e1e6-0051-487f-b9ca-76003e7deed1" (UID: "a2f5e1e6-0051-487f-b9ca-76003e7deed1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.696340 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "832df8ad-6b73-46a8-979f-ec3887c49e83" (UID: "832df8ad-6b73-46a8-979f-ec3887c49e83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.721112 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" (UID: "2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.742912 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "832df8ad-6b73-46a8-979f-ec3887c49e83" (UID: "832df8ad-6b73-46a8-979f-ec3887c49e83"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.757790 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "832df8ad-6b73-46a8-979f-ec3887c49e83" (UID: "832df8ad-6b73-46a8-979f-ec3887c49e83"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.774371 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data" (OuterVolumeSpecName: "config-data") pod "a2f5e1e6-0051-487f-b9ca-76003e7deed1" (UID: "a2f5e1e6-0051-487f-b9ca-76003e7deed1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.785544 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.785579 4988 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.785603 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.785611 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2f5e1e6-0051-487f-b9ca-76003e7deed1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.785659 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/832df8ad-6b73-46a8-979f-ec3887c49e83-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:35 crc kubenswrapper[4988]: I1123 07:10:35.785668 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.882029 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.882607 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="ceilometer-central-agent" containerID="cri-o://0f03ec543429a626c8d33b783b6684da955e2b3df62fa5e03977931c6cff0b9b" gracePeriod=30 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.883022 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="proxy-httpd" containerID="cri-o://543aecdb571138b8617239f7cdae649de1a2ea369419630d777d643c64fc0d98" gracePeriod=30 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.883063 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="sg-core" containerID="cri-o://bb50deda847e5c4d01192688b630e90316d62e3b678ef1a62e6da9a8a390be52" gracePeriod=30 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.883098 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" 
containerName="ceilometer-notification-agent" containerID="cri-o://c03076932f1bd8fe2ee2079c7b8e87ba81b2baeb499ec22610473dfd45ba6936" gracePeriod=30 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.921725 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.921936 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" containerName="kube-state-metrics" containerID="cri-o://5ebe52496c238117a9598c27a6bbc10f3777cd5ad280a8dd4625534d34f3fa75" gracePeriod=30 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.951306 4988 generic.go:334] "Generic (PLEG): container finished" podID="09c548ca-78f0-4e91-8a5d-dce756b0421e" containerID="2449f703f7311cf646d0edc484376bd64bc707c2506b087647d3f70f964b9a7b" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.951408 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"09c548ca-78f0-4e91-8a5d-dce756b0421e","Type":"ContainerDied","Data":"2449f703f7311cf646d0edc484376bd64bc707c2506b087647d3f70f964b9a7b"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.951453 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"09c548ca-78f0-4e91-8a5d-dce756b0421e","Type":"ContainerDied","Data":"fffb9ff92a9e03963faf1e5a3d437029a8fd2d9e4b349c0cf3289d45f30bba22"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.951463 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fffb9ff92a9e03963faf1e5a3d437029a8fd2d9e4b349c0cf3289d45f30bba22" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.953718 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a2f5e1e6-0051-487f-b9ca-76003e7deed1","Type":"ContainerDied","Data":"574d8a84526d6e5d16672b7ef4cc64cf6eaf4d91ca66342e55524186ddac960b"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.953803 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.967405 4988 generic.go:334] "Generic (PLEG): container finished" podID="c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" containerID="09b29081da7241818cdcc74db9b8d720eb74975763b6c6d467e42525454be55b" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.967464 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1","Type":"ContainerDied","Data":"09b29081da7241818cdcc74db9b8d720eb74975763b6c6d467e42525454be55b"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.967495 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1","Type":"ContainerDied","Data":"1357fabbfcd125562c5203f9b929775db08972b449100e02282445af5f753903"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:35.967508 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1357fabbfcd125562c5203f9b929775db08972b449100e02282445af5f753903" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.002640 4988 generic.go:334] "Generic (PLEG): container finished" podID="43d09b31-ee49-498b-bbaf-368e53723f62" containerID="b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.002753 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"43d09b31-ee49-498b-bbaf-368e53723f62","Type":"ContainerDied","Data":"b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.016454 4988 generic.go:334] "Generic (PLEG): container finished" podID="09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d" containerID="ddf5e557a856c1e8720639a34e6349eb693284d794748d6356d59794c5f7cb6d" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.016519 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder80d1-account-delete-sq4dt" event={"ID":"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d","Type":"ContainerDied","Data":"ddf5e557a856c1e8720639a34e6349eb693284d794748d6356d59794c5f7cb6d"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.027526 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05c20-account-delete-fvdvh" event={"ID":"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60","Type":"ContainerStarted","Data":"66c15cd32a49446bf787535a83ba95c1099387914c033c58e0a73521f56571b7"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.037728 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.037923 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" containerName="memcached" containerID="cri-o://fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7" gracePeriod=30 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.080560 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron21b3-account-delete-p2f8j" event={"ID":"4b82b9a4-6707-446c-abea-2d4a560a43d7","Type":"ContainerStarted","Data":"afb06f3bee648e472f785867007795ba2de84b2e773e3bec1d5d3afef328edb9"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.116800 4988 generic.go:334] "Generic (PLEG): container finished" 
podID="c8d7a2fe-56c9-4c21-98b4-88c12252bbe7" containerID="01925025abbcd7f5c67405e82f0541a012e5128834a354abd5bed9a4d07eeebd" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.116922 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance23d0-account-delete-g2lxz" event={"ID":"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7","Type":"ContainerDied","Data":"01925025abbcd7f5c67405e82f0541a012e5128834a354abd5bed9a4d07eeebd"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.135612 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-929sx"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.146343 4988 generic.go:334] "Generic (PLEG): container finished" podID="351d084c-73d8-4965-97c8-407826793cd6" containerID="6c41f5805b7efdab5e67edf42677c7e686ae567b5ac0613407643871e1d427b1" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.146407 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" event={"ID":"351d084c-73d8-4965-97c8-407826793cd6","Type":"ContainerDied","Data":"6c41f5805b7efdab5e67edf42677c7e686ae567b5ac0613407643871e1d427b1"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.146431 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" event={"ID":"351d084c-73d8-4965-97c8-407826793cd6","Type":"ContainerDied","Data":"fc6d1b0bada9dcd1d8cfa49893b8ed0749f5c923315de1a1c048614e299a2410"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.146442 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6d1b0bada9dcd1d8cfa49893b8ed0749f5c923315de1a1c048614e299a2410" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.173691 4988 generic.go:334] "Generic (PLEG): container finished" podID="81f3e05f-502b-4e0a-b5a2-4ab8ec42c410" containerID="3a076926a690b3b95246b9e710f1d3840e5599a0de28fa8b87053fb5acac3d3f" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.174094 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicanbfce-account-delete-mthln" event={"ID":"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410","Type":"ContainerDied","Data":"3a076926a690b3b95246b9e710f1d3840e5599a0de28fa8b87053fb5acac3d3f"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.177330 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rs4l6"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.184925 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-929sx"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.197803 4988 generic.go:334] "Generic (PLEG): container finished" podID="52da7e45-da4b-4b22-b4d9-de675091c282" containerID="763109e37e768ab5ddf0aa50bd585326267053ebe8bcb0bd2dabb1628e7c8658" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.197858 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement2c17-account-delete-ps9t4" event={"ID":"52da7e45-da4b-4b22-b4d9-de675091c282","Type":"ContainerDied","Data":"763109e37e768ab5ddf0aa50bd585326267053ebe8bcb0bd2dabb1628e7c8658"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.222957 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rs4l6"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.223871 4988 generic.go:334] "Generic (PLEG): container finished" podID="610d9cd6-32c2-4a24-a462-df3c8da3f90f" 
containerID="51b62861a02a883b10cbd3bcd5bbbcaf41aa233a576f99e4a22007528ed5f304" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.223954 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.224084 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.225031 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapicb35-account-delete-wsxgk" event={"ID":"610d9cd6-32c2-4a24-a462-df3c8da3f90f","Type":"ContainerDied","Data":"51b62861a02a883b10cbd3bcd5bbbcaf41aa233a576f99e4a22007528ed5f304"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.233776 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5c99767b4c-cbdj7"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.234019 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-5c99767b4c-cbdj7" podUID="7bde2362-ff90-47d5-845c-8dfcfe826a61" containerName="keystone-api" containerID="cri-o://562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95" gracePeriod=30 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.259564 4988 scope.go:117] "RemoveContainer" containerID="a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9" Nov 23 07:10:37 crc kubenswrapper[4988]: E1123 07:10:36.286004 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9\": container with ID starting with a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9 not found: ID does not exist" containerID="a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.286037 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9"} err="failed to get container status \"a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9\": rpc error: code = NotFound desc = could not find container \"a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9\": container with ID starting with a1697047c224185850751a3b678a67cde71091a8e6542e369d7e12a0a70c09e9 not found: ID does not exist" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.286060 4988 scope.go:117] "RemoveContainer" containerID="df780d567dc75d2a747323c194d8edc1f1c8620703d8cf00e07d978a89c64cd2" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.299382 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.345720 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6bjf7"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.346304 4988 scope.go:117] "RemoveContainer" containerID="03e37d124318dbc7bdae86e68e8a56352fe7f075c2540608bb85d91aa3b8f04d" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.347482 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.348512 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.349254 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.350205 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.358709 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6bjf7"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.393349 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell05c20-account-delete-fvdvh"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.414894 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5c20-account-create-ctwp2"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.426038 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5c20-account-create-ctwp2"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.461715 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jps4d"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.475019 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jps4d"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.482752 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1c7a-account-create-sbk64"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.488660 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1c7a-account-create-sbk64"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.514777 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-etc-swift\") pod \"351d084c-73d8-4965-97c8-407826793cd6\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.514831 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4mv\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-kube-api-access-7j4mv\") pod \"351d084c-73d8-4965-97c8-407826793cd6\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.514857 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-combined-ca-bundle\") pod \"351d084c-73d8-4965-97c8-407826793cd6\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.514895 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-run-httpd\") pod \"351d084c-73d8-4965-97c8-407826793cd6\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.514937 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-combined-ca-bundle\") pod \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " Nov 
23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.514951 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-config-data\") pod \"43d09b31-ee49-498b-bbaf-368e53723f62\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.514972 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-generated\") pod \"09c548ca-78f0-4e91-8a5d-dce756b0421e\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.514994 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-public-tls-certs\") pod \"351d084c-73d8-4965-97c8-407826793cd6\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515019 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2v84\" (UniqueName: \"kubernetes.io/projected/09c548ca-78f0-4e91-8a5d-dce756b0421e-kube-api-access-c2v84\") pod \"09c548ca-78f0-4e91-8a5d-dce756b0421e\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515073 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"09c548ca-78f0-4e91-8a5d-dce756b0421e\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515091 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-nova-novncproxy-tls-certs\") pod \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515107 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-vencrypt-tls-certs\") pod \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515122 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-config-data\") pod \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515145 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-kolla-config\") pod \"09c548ca-78f0-4e91-8a5d-dce756b0421e\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515173 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-combined-ca-bundle\") pod \"09c548ca-78f0-4e91-8a5d-dce756b0421e\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " Nov 23 07:10:37 
crc kubenswrapper[4988]: I1123 07:10:36.515208 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-galera-tls-certs\") pod \"09c548ca-78f0-4e91-8a5d-dce756b0421e\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515227 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-internal-tls-certs\") pod \"351d084c-73d8-4965-97c8-407826793cd6\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515243 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgqz4\" (UniqueName: \"kubernetes.io/projected/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-kube-api-access-kgqz4\") pod \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\" (UID: \"c4c4a2cd-004d-42ad-bfee-3ec44daff1f1\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515281 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-config-data\") pod \"351d084c-73d8-4965-97c8-407826793cd6\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515313 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-log-httpd\") pod \"351d084c-73d8-4965-97c8-407826793cd6\" (UID: \"351d084c-73d8-4965-97c8-407826793cd6\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515334 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5w2t\" (UniqueName: \"kubernetes.io/projected/43d09b31-ee49-498b-bbaf-368e53723f62-kube-api-access-l5w2t\") pod \"43d09b31-ee49-498b-bbaf-368e53723f62\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515358 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-default\") pod \"09c548ca-78f0-4e91-8a5d-dce756b0421e\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515381 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-combined-ca-bundle\") pod \"43d09b31-ee49-498b-bbaf-368e53723f62\" (UID: \"43d09b31-ee49-498b-bbaf-368e53723f62\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.515409 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-operator-scripts\") pod \"09c548ca-78f0-4e91-8a5d-dce756b0421e\" (UID: \"09c548ca-78f0-4e91-8a5d-dce756b0421e\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.516596 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09c548ca-78f0-4e91-8a5d-dce756b0421e" (UID: 
"09c548ca-78f0-4e91-8a5d-dce756b0421e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.516663 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3460da-718e-4daf-b104-a3810d37f437" path="/var/lib/kubelet/pods/0e3460da-718e-4daf-b104-a3810d37f437/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.517319 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "09c548ca-78f0-4e91-8a5d-dce756b0421e" (UID: "09c548ca-78f0-4e91-8a5d-dce756b0421e"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.517432 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170939b5-2d04-422e-a463-fa080622257b" path="/var/lib/kubelet/pods/170939b5-2d04-422e-a463-fa080622257b/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.517980 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2afd4c0a-59f2-4313-a198-4e0e8255f163" path="/var/lib/kubelet/pods/2afd4c0a-59f2-4313-a198-4e0e8255f163/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.518706 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4" path="/var/lib/kubelet/pods/2d203d3a-7b32-4fa7-ba0b-d2cc36b492c4/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.520760 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402d3c21-bc17-4659-8ed8-cc7bfece6d0a" path="/var/lib/kubelet/pods/402d3c21-bc17-4659-8ed8-cc7bfece6d0a/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.521476 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f2198a-7d70-4447-b8c2-62ac40b5c167" path="/var/lib/kubelet/pods/75f2198a-7d70-4447-b8c2-62ac40b5c167/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.522181 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7997250c-3018-44d1-9c9a-ff245889d239" path="/var/lib/kubelet/pods/7997250c-3018-44d1-9c9a-ff245889d239/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.523364 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="979eb123-9af3-468e-8725-0dc8b8b2cb43" path="/var/lib/kubelet/pods/979eb123-9af3-468e-8725-0dc8b8b2cb43/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.524058 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa351d3a-ce77-4c06-8139-c4cdc669b330" path="/var/lib/kubelet/pods/aa351d3a-ce77-4c06-8139-c4cdc669b330/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.524929 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" path="/var/lib/kubelet/pods/c11163ee-e1e7-47a7-a454-610a8b27542f/volumes" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.529933 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "09c548ca-78f0-4e91-8a5d-dce756b0421e" (UID: "09c548ca-78f0-4e91-8a5d-dce756b0421e"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.531029 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "351d084c-73d8-4965-97c8-407826793cd6" (UID: "351d084c-73d8-4965-97c8-407826793cd6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.534633 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "09c548ca-78f0-4e91-8a5d-dce756b0421e" (UID: "09c548ca-78f0-4e91-8a5d-dce756b0421e"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.537461 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-kube-api-access-kgqz4" (OuterVolumeSpecName: "kube-api-access-kgqz4") pod "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" (UID: "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1"). InnerVolumeSpecName "kube-api-access-kgqz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.540703 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "351d084c-73d8-4965-97c8-407826793cd6" (UID: "351d084c-73d8-4965-97c8-407826793cd6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.544843 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "351d084c-73d8-4965-97c8-407826793cd6" (UID: "351d084c-73d8-4965-97c8-407826793cd6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.544900 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c548ca-78f0-4e91-8a5d-dce756b0421e-kube-api-access-c2v84" (OuterVolumeSpecName: "kube-api-access-c2v84") pod "09c548ca-78f0-4e91-8a5d-dce756b0421e" (UID: "09c548ca-78f0-4e91-8a5d-dce756b0421e"). InnerVolumeSpecName "kube-api-access-c2v84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.544948 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d09b31-ee49-498b-bbaf-368e53723f62-kube-api-access-l5w2t" (OuterVolumeSpecName: "kube-api-access-l5w2t") pod "43d09b31-ee49-498b-bbaf-368e53723f62" (UID: "43d09b31-ee49-498b-bbaf-368e53723f62"). InnerVolumeSpecName "kube-api-access-l5w2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.549109 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-kube-api-access-7j4mv" (OuterVolumeSpecName: "kube-api-access-7j4mv") pod "351d084c-73d8-4965-97c8-407826793cd6" (UID: "351d084c-73d8-4965-97c8-407826793cd6"). 
InnerVolumeSpecName "kube-api-access-7j4mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.557975 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-config-data" (OuterVolumeSpecName: "config-data") pod "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" (UID: "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.561634 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "09c548ca-78f0-4e91-8a5d-dce756b0421e" (UID: "09c548ca-78f0-4e91-8a5d-dce756b0421e"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.576411 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43d09b31-ee49-498b-bbaf-368e53723f62" (UID: "43d09b31-ee49-498b-bbaf-368e53723f62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.602377 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "351d084c-73d8-4965-97c8-407826793cd6" (UID: "351d084c-73d8-4965-97c8-407826793cd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618360 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618380 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5w2t\" (UniqueName: \"kubernetes.io/projected/43d09b31-ee49-498b-bbaf-368e53723f62-kube-api-access-l5w2t\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618392 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618401 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618411 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618421 4988 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618429 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j4mv\" (UniqueName: \"kubernetes.io/projected/351d084c-73d8-4965-97c8-407826793cd6-kube-api-access-7j4mv\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618440 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618448 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/351d084c-73d8-4965-97c8-407826793cd6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618456 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/09c548ca-78f0-4e91-8a5d-dce756b0421e-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618464 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2v84\" (UniqueName: \"kubernetes.io/projected/09c548ca-78f0-4e91-8a5d-dce756b0421e-kube-api-access-c2v84\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618480 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618489 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-config-data\") on node \"crc\" 
DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618498 4988 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/09c548ca-78f0-4e91-8a5d-dce756b0421e-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.618505 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgqz4\" (UniqueName: \"kubernetes.io/projected/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-kube-api-access-kgqz4\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.652950 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "09c548ca-78f0-4e91-8a5d-dce756b0421e" (UID: "09c548ca-78f0-4e91-8a5d-dce756b0421e"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.659550 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-config-data" (OuterVolumeSpecName: "config-data") pod "351d084c-73d8-4965-97c8-407826793cd6" (UID: "351d084c-73d8-4965-97c8-407826793cd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.665748 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="be021496-c112-4578-bfe4-8639fa51480a" containerName="galera" containerID="cri-o://b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5" gracePeriod=30 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.669312 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "351d084c-73d8-4965-97c8-407826793cd6" (UID: "351d084c-73d8-4965-97c8-407826793cd6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.674439 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" (UID: "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.679205 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" (UID: "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.686563 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.689001 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" (UID: "c4c4a2cd-004d-42ad-bfee-3ec44daff1f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693390 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rpqfv"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693413 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rpqfv"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693427 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance23d0-account-delete-g2lxz"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693439 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693449 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-23d0-account-create-dwcp7"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693460 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693469 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-23d0-account-create-dwcp7"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693481 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693490 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693506 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-h7wgh"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693516 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-h7wgh"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693528 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron21b3-account-delete-p2f8j"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.693539 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-21b3-account-create-p2lz4"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.694651 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-config-data" (OuterVolumeSpecName: "config-data") pod "43d09b31-ee49-498b-bbaf-368e53723f62" (UID: "43d09b31-ee49-498b-bbaf-368e53723f62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.696281 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "351d084c-73d8-4965-97c8-407826793cd6" (UID: "351d084c-73d8-4965-97c8-407826793cd6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.702136 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-21b3-account-create-p2lz4"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.716517 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:55932->10.217.0.201:8775: read: connection reset by peer" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.716916 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:55918->10.217.0.201:8775: read: connection reset by peer" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722404 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722429 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d09b31-ee49-498b-bbaf-368e53723f62-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722440 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722448 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722458 4988 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722466 4988 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722475 4988 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722483 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-internal-tls-certs\") on node \"crc\" 
DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722491 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/351d084c-73d8-4965-97c8-407826793cd6-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.722700 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09c548ca-78f0-4e91-8a5d-dce756b0421e" (UID: "09c548ca-78f0-4e91-8a5d-dce756b0421e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.728972 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="d4be2080-1204-4f6e-ac00-bba757695872" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.172:8776/healthcheck\": read tcp 10.217.0.2:35374->10.217.0.172:8776: read: connection reset by peer" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.732714 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-crz5q"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.740118 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-crz5q"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.744990 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-bfce-account-create-4qbgx"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.754675 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicanbfce-account-delete-mthln"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.762238 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-bfce-account-create-4qbgx"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:36.824229 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c548ca-78f0-4e91-8a5d-dce756b0421e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.005861 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.181:8081/readyz\": dial tcp 10.217.0.181:8081: connect: connection refused" Nov 23 07:10:37 crc kubenswrapper[4988]: E1123 07:10:37.226924 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d023e152f127851ebcc0816fd9a68deedbc0ac220f32efd3e76af9c66c576c7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:10:37 crc kubenswrapper[4988]: E1123 07:10:37.229576 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d023e152f127851ebcc0816fd9a68deedbc0ac220f32efd3e76af9c66c576c7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:10:37 crc kubenswrapper[4988]: E1123 07:10:37.231660 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7d023e152f127851ebcc0816fd9a68deedbc0ac220f32efd3e76af9c66c576c7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:10:37 crc kubenswrapper[4988]: E1123 07:10:37.231739 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" containerName="nova-cell0-conductor-conductor" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.241639 4988 generic.go:334] "Generic (PLEG): container finished" podID="d4be2080-1204-4f6e-ac00-bba757695872" containerID="70c5ffb584b9bbe5c2b209c28d137387ef7c662311797b190ca13786ca38138a" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.241695 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4be2080-1204-4f6e-ac00-bba757695872","Type":"ContainerDied","Data":"70c5ffb584b9bbe5c2b209c28d137387ef7c662311797b190ca13786ca38138a"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.248475 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"43d09b31-ee49-498b-bbaf-368e53723f62","Type":"ContainerDied","Data":"1c3cd1d6db4c354043571ed95c1110021a826f29ac6346e0bb4a917df6ef9cb5"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.248502 4988 scope.go:117] "RemoveContainer" containerID="b6602cacfb54ba21900dbc87f502e4a509824629f836b6caefca072a6fea1d75" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.248581 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.258049 4988 generic.go:334] "Generic (PLEG): container finished" podID="61784d29-67cb-4150-923e-0e819bdde923" containerID="9177522bc27ecd27e87dbfaffac6a6f6968557f56ceb08013aef630c895b62fb" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.258102 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f7fdc8956-g6vw5" event={"ID":"61784d29-67cb-4150-923e-0e819bdde923","Type":"ContainerDied","Data":"9177522bc27ecd27e87dbfaffac6a6f6968557f56ceb08013aef630c895b62fb"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.260903 4988 generic.go:334] "Generic (PLEG): container finished" podID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerID="d2b1f46c3d98eca77fce5ba02072ca8a40d4ce3218cb93220d983e5effd1b750" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.260956 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a28a2bd-cf03-47d7-b142-63b066fdeb42","Type":"ContainerDied","Data":"d2b1f46c3d98eca77fce5ba02072ca8a40d4ce3218cb93220d983e5effd1b750"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.270146 4988 generic.go:334] "Generic (PLEG): container finished" podID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerID="482947669a01cf95a71f90baa22b06aeac92eb3bd55c440708e11d5e72e1f4ca" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.270255 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4","Type":"ContainerDied","Data":"482947669a01cf95a71f90baa22b06aeac92eb3bd55c440708e11d5e72e1f4ca"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 
07:10:37.277100 4988 generic.go:334] "Generic (PLEG): container finished" podID="ece4b7bd-2b01-4ad3-8782-a4d7341f0b60" containerID="66c15cd32a49446bf787535a83ba95c1099387914c033c58e0a73521f56571b7" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.277153 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05c20-account-delete-fvdvh" event={"ID":"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60","Type":"ContainerDied","Data":"66c15cd32a49446bf787535a83ba95c1099387914c033c58e0a73521f56571b7"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.286284 4988 generic.go:334] "Generic (PLEG): container finished" podID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerID="ffc32302cd38e863bb6d6aaea86be25fa12696798a187ea26576d320a9c5ccd3" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.286391 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"720d09f3-1104-47a0-93e9-ffb48cf1ae69","Type":"ContainerDied","Data":"ffc32302cd38e863bb6d6aaea86be25fa12696798a187ea26576d320a9c5ccd3"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.322479 4988 generic.go:334] "Generic (PLEG): container finished" podID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" containerID="8d370a258077eef29df07553ebb57bc3f0df94518539e125a0c3eaef83ef1b5b" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.322565 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" event={"ID":"fbae5c0b-cb91-459a-acb7-e494aedd6d99","Type":"ContainerDied","Data":"8d370a258077eef29df07553ebb57bc3f0df94518539e125a0c3eaef83ef1b5b"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.327942 4988 generic.go:334] "Generic (PLEG): container finished" podID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerID="d1dbf13c4c51f91d80504e9813025e575415f2d87f015a192bbdff65a11f6ae1" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.327999 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" event={"ID":"9afe27be-257c-4ea4-84c1-e41a289ad06a","Type":"ContainerDied","Data":"d1dbf13c4c51f91d80504e9813025e575415f2d87f015a192bbdff65a11f6ae1"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.337620 4988 generic.go:334] "Generic (PLEG): container finished" podID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerID="c3aa642238bf2e182f6aa8b168ebe96dd9671c5155884f7a97375c632ebe4f02" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.337694 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55cfdd5f8d-kn94x" event={"ID":"4b6f8f28-b3df-4d34-a898-74f4dc12f201","Type":"ContainerDied","Data":"c3aa642238bf2e182f6aa8b168ebe96dd9671c5155884f7a97375c632ebe4f02"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.343470 4988 generic.go:334] "Generic (PLEG): container finished" podID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerID="543aecdb571138b8617239f7cdae649de1a2ea369419630d777d643c64fc0d98" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.343494 4988 generic.go:334] "Generic (PLEG): container finished" podID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerID="bb50deda847e5c4d01192688b630e90316d62e3b678ef1a62e6da9a8a390be52" exitCode=2 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.343501 4988 generic.go:334] "Generic (PLEG): container finished" podID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" 
containerID="c03076932f1bd8fe2ee2079c7b8e87ba81b2baeb499ec22610473dfd45ba6936" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.343508 4988 generic.go:334] "Generic (PLEG): container finished" podID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerID="0f03ec543429a626c8d33b783b6684da955e2b3df62fa5e03977931c6cff0b9b" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.343559 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerDied","Data":"543aecdb571138b8617239f7cdae649de1a2ea369419630d777d643c64fc0d98"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.343583 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerDied","Data":"bb50deda847e5c4d01192688b630e90316d62e3b678ef1a62e6da9a8a390be52"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.343593 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerDied","Data":"c03076932f1bd8fe2ee2079c7b8e87ba81b2baeb499ec22610473dfd45ba6936"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.343602 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerDied","Data":"0f03ec543429a626c8d33b783b6684da955e2b3df62fa5e03977931c6cff0b9b"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.354327 4988 generic.go:334] "Generic (PLEG): container finished" podID="39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" containerID="5ebe52496c238117a9598c27a6bbc10f3777cd5ad280a8dd4625534d34f3fa75" exitCode=2 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.354458 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3","Type":"ContainerDied","Data":"5ebe52496c238117a9598c27a6bbc10f3777cd5ad280a8dd4625534d34f3fa75"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.360477 4988 generic.go:334] "Generic (PLEG): container finished" podID="4b82b9a4-6707-446c-abea-2d4a560a43d7" containerID="afb06f3bee648e472f785867007795ba2de84b2e773e3bec1d5d3afef328edb9" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.360535 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron21b3-account-delete-p2f8j" event={"ID":"4b82b9a4-6707-446c-abea-2d4a560a43d7","Type":"ContainerDied","Data":"afb06f3bee648e472f785867007795ba2de84b2e773e3bec1d5d3afef328edb9"} Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.362618 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6bfdb6f865-pn8fq" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.362856 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.364744 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.408690 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-f7fdc8956-g6vw5" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.408971 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-f7fdc8956-g6vw5" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.631257 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder80d1-account-delete-sq4dt" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.650179 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6bfdb6f865-pn8fq"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.677270 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6bfdb6f865-pn8fq"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.724447 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.736933 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.750811 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-operator-scripts\") pod \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\" (UID: \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.750885 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vltj\" (UniqueName: \"kubernetes.io/projected/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-kube-api-access-5vltj\") pod \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\" (UID: \"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.754301 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d" (UID: "09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.757060 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-kube-api-access-5vltj" (OuterVolumeSpecName: "kube-api-access-5vltj") pod "09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d" (UID: "09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d"). InnerVolumeSpecName "kube-api-access-5vltj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.761869 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.783409 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.803776 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.832712 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.844690 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.852639 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.852672 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vltj\" (UniqueName: \"kubernetes.io/projected/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d-kube-api-access-5vltj\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.898691 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.901582 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.937539 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.947304 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.955829 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-internal-tls-certs\") pod \"d4be2080-1204-4f6e-ac00-bba757695872\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.955864 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-combined-ca-bundle\") pod \"d4be2080-1204-4f6e-ac00-bba757695872\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.955907 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-public-tls-certs\") pod \"d4be2080-1204-4f6e-ac00-bba757695872\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.956030 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4be2080-1204-4f6e-ac00-bba757695872-etc-machine-id\") pod \"d4be2080-1204-4f6e-ac00-bba757695872\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.956048 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-scripts\") pod \"d4be2080-1204-4f6e-ac00-bba757695872\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.956073 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nfjn\" (UniqueName: \"kubernetes.io/projected/d4be2080-1204-4f6e-ac00-bba757695872-kube-api-access-4nfjn\") pod \"d4be2080-1204-4f6e-ac00-bba757695872\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.956109 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4be2080-1204-4f6e-ac00-bba757695872-logs\") pod \"d4be2080-1204-4f6e-ac00-bba757695872\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.956140 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data\") pod \"d4be2080-1204-4f6e-ac00-bba757695872\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.956224 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data-custom\") pod \"d4be2080-1204-4f6e-ac00-bba757695872\" (UID: \"d4be2080-1204-4f6e-ac00-bba757695872\") " Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.957638 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4be2080-1204-4f6e-ac00-bba757695872-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d4be2080-1204-4f6e-ac00-bba757695872" (UID: 
"d4be2080-1204-4f6e-ac00-bba757695872"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.957891 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4be2080-1204-4f6e-ac00-bba757695872-logs" (OuterVolumeSpecName: "logs") pod "d4be2080-1204-4f6e-ac00-bba757695872" (UID: "d4be2080-1204-4f6e-ac00-bba757695872"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.960060 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.968572 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-scripts" (OuterVolumeSpecName: "scripts") pod "d4be2080-1204-4f6e-ac00-bba757695872" (UID: "d4be2080-1204-4f6e-ac00-bba757695872"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.969287 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4be2080-1204-4f6e-ac00-bba757695872-kube-api-access-4nfjn" (OuterVolumeSpecName: "kube-api-access-4nfjn") pod "d4be2080-1204-4f6e-ac00-bba757695872" (UID: "d4be2080-1204-4f6e-ac00-bba757695872"). InnerVolumeSpecName "kube-api-access-4nfjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.969510 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d4be2080-1204-4f6e-ac00-bba757695872" (UID: "d4be2080-1204-4f6e-ac00-bba757695872"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.979546 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:10:37 crc kubenswrapper[4988]: I1123 07:10:37.985940 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.005818 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.044142 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4be2080-1204-4f6e-ac00-bba757695872" (UID: "d4be2080-1204-4f6e-ac00-bba757695872"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059025 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-combined-ca-bundle\") pod \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059058 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-combined-ca-bundle\") pod \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059088 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-run-httpd\") pod \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059141 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-config-data\") pod \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059220 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-config-data\") pod \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059253 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-public-tls-certs\") pod \"61784d29-67cb-4150-923e-0e819bdde923\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059300 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5xsp\" (UniqueName: \"kubernetes.io/projected/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-api-access-t5xsp\") pod \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059325 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-internal-tls-certs\") pod \"61784d29-67cb-4150-923e-0e819bdde923\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059339 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-log-httpd\") pod \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059381 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-certs\") pod 
\"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059412 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-config\") pod \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059449 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data\") pod \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059476 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-public-tls-certs\") pod \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059497 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-ceilometer-tls-certs\") pod \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059532 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-nova-metadata-tls-certs\") pod \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059557 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data-custom\") pod \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059574 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data\") pod \"61784d29-67cb-4150-923e-0e819bdde923\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059608 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-combined-ca-bundle\") pod \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059627 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwkmz\" (UniqueName: \"kubernetes.io/projected/61784d29-67cb-4150-923e-0e819bdde923-kube-api-access-bwkmz\") pod \"61784d29-67cb-4150-923e-0e819bdde923\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059662 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-combined-ca-bundle\") pod \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059726 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbae5c0b-cb91-459a-acb7-e494aedd6d99-logs\") pod \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059771 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-combined-ca-bundle\") pod \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\" (UID: \"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059799 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61784d29-67cb-4150-923e-0e819bdde923-logs\") pod \"61784d29-67cb-4150-923e-0e819bdde923\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059828 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-internal-tls-certs\") pod \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059871 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqsbd\" (UniqueName: \"kubernetes.io/projected/4b6f8f28-b3df-4d34-a898-74f4dc12f201-kube-api-access-zqsbd\") pod \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059893 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a28a2bd-cf03-47d7-b142-63b066fdeb42-logs\") pod \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059940 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-config-data\") pod \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059974 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fgbc\" (UniqueName: \"kubernetes.io/projected/fbae5c0b-cb91-459a-acb7-e494aedd6d99-kube-api-access-7fgbc\") pod \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\" (UID: \"fbae5c0b-cb91-459a-acb7-e494aedd6d99\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.059990 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6f8f28-b3df-4d34-a898-74f4dc12f201-logs\") pod \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060033 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data-custom\") pod \"61784d29-67cb-4150-923e-0e819bdde923\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060054 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-sg-core-conf-yaml\") pod \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060100 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngl87\" (UniqueName: \"kubernetes.io/projected/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-kube-api-access-ngl87\") pod \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060116 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-scripts\") pod \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\" (UID: \"4b6f8f28-b3df-4d34-a898-74f4dc12f201\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060132 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-scripts\") pod \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\" (UID: \"6095bfb4-4519-4c6a-9ded-5f8a0db254c1\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060152 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-combined-ca-bundle\") pod \"61784d29-67cb-4150-923e-0e819bdde923\" (UID: \"61784d29-67cb-4150-923e-0e819bdde923\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060211 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm7xk\" (UniqueName: \"kubernetes.io/projected/2a28a2bd-cf03-47d7-b142-63b066fdeb42-kube-api-access-tm7xk\") pod \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060777 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060790 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060799 4988 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d4be2080-1204-4f6e-ac00-bba757695872-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060807 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060846 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nfjn\" (UniqueName: 
\"kubernetes.io/projected/d4be2080-1204-4f6e-ac00-bba757695872-kube-api-access-4nfjn\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.060858 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4be2080-1204-4f6e-ac00-bba757695872-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.068415 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d4be2080-1204-4f6e-ac00-bba757695872" (UID: "d4be2080-1204-4f6e-ac00-bba757695872"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.070373 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6095bfb4-4519-4c6a-9ded-5f8a0db254c1" (UID: "6095bfb4-4519-4c6a-9ded-5f8a0db254c1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.074502 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6f8f28-b3df-4d34-a898-74f4dc12f201-kube-api-access-zqsbd" (OuterVolumeSpecName: "kube-api-access-zqsbd") pod "4b6f8f28-b3df-4d34-a898-74f4dc12f201" (UID: "4b6f8f28-b3df-4d34-a898-74f4dc12f201"). InnerVolumeSpecName "kube-api-access-zqsbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.074790 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbae5c0b-cb91-459a-acb7-e494aedd6d99-logs" (OuterVolumeSpecName: "logs") pod "fbae5c0b-cb91-459a-acb7-e494aedd6d99" (UID: "fbae5c0b-cb91-459a-acb7-e494aedd6d99"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.076923 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61784d29-67cb-4150-923e-0e819bdde923-logs" (OuterVolumeSpecName: "logs") pod "61784d29-67cb-4150-923e-0e819bdde923" (UID: "61784d29-67cb-4150-923e-0e819bdde923"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.078544 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a28a2bd-cf03-47d7-b142-63b066fdeb42-logs" (OuterVolumeSpecName: "logs") pod "2a28a2bd-cf03-47d7-b142-63b066fdeb42" (UID: "2a28a2bd-cf03-47d7-b142-63b066fdeb42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.081617 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6095bfb4-4519-4c6a-9ded-5f8a0db254c1" (UID: "6095bfb4-4519-4c6a-9ded-5f8a0db254c1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.082456 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b6f8f28-b3df-4d34-a898-74f4dc12f201-logs" (OuterVolumeSpecName: "logs") pod "4b6f8f28-b3df-4d34-a898-74f4dc12f201" (UID: "4b6f8f28-b3df-4d34-a898-74f4dc12f201"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.158763 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-scripts" (OuterVolumeSpecName: "scripts") pod "6095bfb4-4519-4c6a-9ded-5f8a0db254c1" (UID: "6095bfb4-4519-4c6a-9ded-5f8a0db254c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.158830 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a28a2bd-cf03-47d7-b142-63b066fdeb42-kube-api-access-tm7xk" (OuterVolumeSpecName: "kube-api-access-tm7xk") pod "2a28a2bd-cf03-47d7-b142-63b066fdeb42" (UID: "2a28a2bd-cf03-47d7-b142-63b066fdeb42"). InnerVolumeSpecName "kube-api-access-tm7xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.158911 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61784d29-67cb-4150-923e-0e819bdde923-kube-api-access-bwkmz" (OuterVolumeSpecName: "kube-api-access-bwkmz") pod "61784d29-67cb-4150-923e-0e819bdde923" (UID: "61784d29-67cb-4150-923e-0e819bdde923"). InnerVolumeSpecName "kube-api-access-bwkmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.158936 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-config-data" (OuterVolumeSpecName: "config-data") pod "d4be2080-1204-4f6e-ac00-bba757695872" (UID: "d4be2080-1204-4f6e-ac00-bba757695872"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.159025 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d4be2080-1204-4f6e-ac00-bba757695872" (UID: "d4be2080-1204-4f6e-ac00-bba757695872"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.159156 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "61784d29-67cb-4150-923e-0e819bdde923" (UID: "61784d29-67cb-4150-923e-0e819bdde923"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.161497 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-combined-ca-bundle\") pod \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.161591 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kolla-config\") pod \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.161617 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-combined-ca-bundle\") pod \"9afe27be-257c-4ea4-84c1-e41a289ad06a\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.161636 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7zvq\" (UniqueName: \"kubernetes.io/projected/9afe27be-257c-4ea4-84c1-e41a289ad06a-kube-api-access-q7zvq\") pod \"9afe27be-257c-4ea4-84c1-e41a289ad06a\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.161681 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data\") pod \"9afe27be-257c-4ea4-84c1-e41a289ad06a\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.161707 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m52kk\" (UniqueName: \"kubernetes.io/projected/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kube-api-access-m52kk\") pod \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.161802 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data-custom\") pod \"9afe27be-257c-4ea4-84c1-e41a289ad06a\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.161841 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-config-data\") pod \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\" (UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.161932 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9afe27be-257c-4ea4-84c1-e41a289ad06a-logs\") pod \"9afe27be-257c-4ea4-84c1-e41a289ad06a\" (UID: \"9afe27be-257c-4ea4-84c1-e41a289ad06a\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162025 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-memcached-tls-certs\") pod \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\" 
(UID: \"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162447 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b6f8f28-b3df-4d34-a898-74f4dc12f201-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162465 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162475 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162482 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm7xk\" (UniqueName: \"kubernetes.io/projected/2a28a2bd-cf03-47d7-b142-63b066fdeb42-kube-api-access-tm7xk\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162490 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162500 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162508 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4be2080-1204-4f6e-ac00-bba757695872-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162516 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162524 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwkmz\" (UniqueName: \"kubernetes.io/projected/61784d29-67cb-4150-923e-0e819bdde923-kube-api-access-bwkmz\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162532 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbae5c0b-cb91-459a-acb7-e494aedd6d99-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162540 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61784d29-67cb-4150-923e-0e819bdde923-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162548 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a28a2bd-cf03-47d7-b142-63b066fdeb42-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162556 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqsbd\" (UniqueName: \"kubernetes.io/projected/4b6f8f28-b3df-4d34-a898-74f4dc12f201-kube-api-access-zqsbd\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.162564 4988 
Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.164503 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-config-data" (OuterVolumeSpecName: "config-data") pod "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" (UID: "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.164611 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" (UID: "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.164732 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-kube-api-access-ngl87" (OuterVolumeSpecName: "kube-api-access-ngl87") pod "6095bfb4-4519-4c6a-9ded-5f8a0db254c1" (UID: "6095bfb4-4519-4c6a-9ded-5f8a0db254c1"). InnerVolumeSpecName "kube-api-access-ngl87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.165079 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9afe27be-257c-4ea4-84c1-e41a289ad06a-logs" (OuterVolumeSpecName: "logs") pod "9afe27be-257c-4ea4-84c1-e41a289ad06a" (UID: "9afe27be-257c-4ea4-84c1-e41a289ad06a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.173074 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-scripts" (OuterVolumeSpecName: "scripts") pod "4b6f8f28-b3df-4d34-a898-74f4dc12f201" (UID: "4b6f8f28-b3df-4d34-a898-74f4dc12f201"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.173459 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fbae5c0b-cb91-459a-acb7-e494aedd6d99" (UID: "fbae5c0b-cb91-459a-acb7-e494aedd6d99"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.173544 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9afe27be-257c-4ea4-84c1-e41a289ad06a-kube-api-access-q7zvq" (OuterVolumeSpecName: "kube-api-access-q7zvq") pod "9afe27be-257c-4ea4-84c1-e41a289ad06a" (UID: "9afe27be-257c-4ea4-84c1-e41a289ad06a"). InnerVolumeSpecName "kube-api-access-q7zvq". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.173803 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-api-access-t5xsp" (OuterVolumeSpecName: "kube-api-access-t5xsp") pod "39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" (UID: "39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3"). InnerVolumeSpecName "kube-api-access-t5xsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.183515 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbae5c0b-cb91-459a-acb7-e494aedd6d99-kube-api-access-7fgbc" (OuterVolumeSpecName: "kube-api-access-7fgbc") pod "fbae5c0b-cb91-459a-acb7-e494aedd6d99" (UID: "fbae5c0b-cb91-459a-acb7-e494aedd6d99"). InnerVolumeSpecName "kube-api-access-7fgbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.200862 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05c20-account-delete-fvdvh" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.218284 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kube-api-access-m52kk" (OuterVolumeSpecName: "kube-api-access-m52kk") pod "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" (UID: "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6"). InnerVolumeSpecName "kube-api-access-m52kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.224001 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9afe27be-257c-4ea4-84c1-e41a289ad06a" (UID: "9afe27be-257c-4ea4-84c1-e41a289ad06a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.252770 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264716 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264743 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264755 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9afe27be-257c-4ea4-84c1-e41a289ad06a-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264766 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fgbc\" (UniqueName: \"kubernetes.io/projected/fbae5c0b-cb91-459a-acb7-e494aedd6d99-kube-api-access-7fgbc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264780 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngl87\" (UniqueName: \"kubernetes.io/projected/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-kube-api-access-ngl87\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264791 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264802 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5xsp\" (UniqueName: \"kubernetes.io/projected/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-api-access-t5xsp\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264813 4988 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264825 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7zvq\" (UniqueName: \"kubernetes.io/projected/9afe27be-257c-4ea4-84c1-e41a289ad06a-kube-api-access-q7zvq\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264836 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m52kk\" (UniqueName: \"kubernetes.io/projected/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-kube-api-access-m52kk\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.264847 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.353934 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a28a2bd-cf03-47d7-b142-63b066fdeb42" (UID: "2a28a2bd-cf03-47d7-b142-63b066fdeb42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.361980 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbae5c0b-cb91-459a-acb7-e494aedd6d99" (UID: "fbae5c0b-cb91-459a-acb7-e494aedd6d99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.364823 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9afe27be-257c-4ea4-84c1-e41a289ad06a" (UID: "9afe27be-257c-4ea4-84c1-e41a289ad06a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.365608 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx24t\" (UniqueName: \"kubernetes.io/projected/4b82b9a4-6707-446c-abea-2d4a560a43d7-kube-api-access-gx24t\") pod \"4b82b9a4-6707-446c-abea-2d4a560a43d7\" (UID: \"4b82b9a4-6707-446c-abea-2d4a560a43d7\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.365725 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b82b9a4-6707-446c-abea-2d4a560a43d7-operator-scripts\") pod \"4b82b9a4-6707-446c-abea-2d4a560a43d7\" (UID: \"4b82b9a4-6707-446c-abea-2d4a560a43d7\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.365791 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-operator-scripts\") pod \"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60\" (UID: \"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.365874 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6tlv\" (UniqueName: \"kubernetes.io/projected/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-kube-api-access-l6tlv\") pod \"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60\" (UID: \"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.366747 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ece4b7bd-2b01-4ad3-8782-a4d7341f0b60" (UID: "ece4b7bd-2b01-4ad3-8782-a4d7341f0b60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.366771 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b82b9a4-6707-446c-abea-2d4a560a43d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b82b9a4-6707-446c-abea-2d4a560a43d7" (UID: "4b82b9a4-6707-446c-abea-2d4a560a43d7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.367845 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.367869 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.367883 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b82b9a4-6707-446c-abea-2d4a560a43d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.367921 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.367937 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.373384 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b82b9a4-6707-446c-abea-2d4a560a43d7-kube-api-access-gx24t" (OuterVolumeSpecName: "kube-api-access-gx24t") pod "4b82b9a4-6707-446c-abea-2d4a560a43d7" (UID: "4b82b9a4-6707-446c-abea-2d4a560a43d7"). InnerVolumeSpecName "kube-api-access-gx24t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.373427 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-kube-api-access-l6tlv" (OuterVolumeSpecName: "kube-api-access-l6tlv") pod "ece4b7bd-2b01-4ad3-8782-a4d7341f0b60" (UID: "ece4b7bd-2b01-4ad3-8782-a4d7341f0b60"). InnerVolumeSpecName "kube-api-access-l6tlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.374964 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6095bfb4-4519-4c6a-9ded-5f8a0db254c1" (UID: "6095bfb4-4519-4c6a-9ded-5f8a0db254c1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.384377 4988 generic.go:334] "Generic (PLEG): container finished" podID="f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" containerID="7d023e152f127851ebcc0816fd9a68deedbc0ac220f32efd3e76af9c66c576c7" exitCode=0 Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.384440 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72","Type":"ContainerDied","Data":"7d023e152f127851ebcc0816fd9a68deedbc0ac220f32efd3e76af9c66c576c7"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.384465 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72","Type":"ContainerDied","Data":"b9068e4ca5525b90c63f7170c02e1c06cb197a62a113bf58483488a37d80734f"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.384476 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9068e4ca5525b90c63f7170c02e1c06cb197a62a113bf58483488a37d80734f" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.386378 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d4be2080-1204-4f6e-ac00-bba757695872","Type":"ContainerDied","Data":"f2bcf89eb1f7712b654f2d3df8898479ed7056451954cf97154c49d5d2280882"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.386424 4988 scope.go:117] "RemoveContainer" containerID="70c5ffb584b9bbe5c2b209c28d137387ef7c662311797b190ca13786ca38138a" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.386547 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.392546 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron21b3-account-delete-p2f8j" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.392725 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron21b3-account-delete-p2f8j" event={"ID":"4b82b9a4-6707-446c-abea-2d4a560a43d7","Type":"ContainerDied","Data":"fadf49234bf96e8e8d34727141627f1b2ae06f923577a71a5a0b18d5245dd48e"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.392868 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fadf49234bf96e8e8d34727141627f1b2ae06f923577a71a5a0b18d5245dd48e" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.395895 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6095bfb4-4519-4c6a-9ded-5f8a0db254c1" (UID: "6095bfb4-4519-4c6a-9ded-5f8a0db254c1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.396828 4988 generic.go:334] "Generic (PLEG): container finished" podID="9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" containerID="fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7" exitCode=0 Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.396913 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.396928 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6","Type":"ContainerDied","Data":"fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.396949 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9cb35e7c-c792-48c9-8f52-ac3e9cc283f6","Type":"ContainerDied","Data":"1b3c8c140876172d79a9382a024018f0383865bdc978003bdcd13ce08b6ea01c"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.400344 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3","Type":"ContainerDied","Data":"71726a59e9e1609b07f6ee83488e520e1d596d9ed0079c42977f852e80b853e4"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.400426 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.402492 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement2c17-account-delete-ps9t4" event={"ID":"52da7e45-da4b-4b22-b4d9-de675091c282","Type":"ContainerDied","Data":"975710bb3902fce1fe0e8ed4bbc591e1ef4ddc8c6c00163527cf4d554787bbdd"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.402549 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="975710bb3902fce1fe0e8ed4bbc591e1ef4ddc8c6c00163527cf4d554787bbdd" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.404794 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapicb35-account-delete-wsxgk" event={"ID":"610d9cd6-32c2-4a24-a462-df3c8da3f90f","Type":"ContainerDied","Data":"d84d30bc4ed1d46d7e95bda76d91fc21f8a621040bcb425bfb55f1cf331cdc3b"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.404867 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84d30bc4ed1d46d7e95bda76d91fc21f8a621040bcb425bfb55f1cf331cdc3b" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.407392 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55cfdd5f8d-kn94x" event={"ID":"4b6f8f28-b3df-4d34-a898-74f4dc12f201","Type":"ContainerDied","Data":"c798f28a4047f85fffb38a1e4ed84f413eccd05375411a4daf12143fced14079"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.407481 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55cfdd5f8d-kn94x" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.411910 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a28a2bd-cf03-47d7-b142-63b066fdeb42","Type":"ContainerDied","Data":"4dbea90f64bb68ab234178a7ad9e71acb72502e419e98844c6dffb29010994aa"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.411920 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.414462 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance23d0-account-delete-g2lxz" event={"ID":"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7","Type":"ContainerDied","Data":"e2b98ead01b43d859c2b773b3b3a8acaa4e7a6e431a6a456a9266454056df44b"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.414502 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2b98ead01b43d859c2b773b3b3a8acaa4e7a6e431a6a456a9266454056df44b" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.430749 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6095bfb4-4519-4c6a-9ded-5f8a0db254c1","Type":"ContainerDied","Data":"dcbf540137dbef8a8283a5884d19e2bdf8ac75f36eb9b709d3986aaf7ca029b1"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.430861 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.440320 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder80d1-account-delete-sq4dt" event={"ID":"09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d","Type":"ContainerDied","Data":"bc91da899038608454e4eb0590a93b35d538e2132fca26e6aa02ba99e96836a9"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.440355 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc91da899038608454e4eb0590a93b35d538e2132fca26e6aa02ba99e96836a9" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.440417 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder80d1-account-delete-sq4dt" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.442339 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data" (OuterVolumeSpecName: "config-data") pod "9afe27be-257c-4ea4-84c1-e41a289ad06a" (UID: "9afe27be-257c-4ea4-84c1-e41a289ad06a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.444787 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell05c20-account-delete-fvdvh" event={"ID":"ece4b7bd-2b01-4ad3-8782-a4d7341f0b60","Type":"ContainerDied","Data":"c3a0f76d21eb88ec11067621dd036c091531ffd932e88f3154c2243c0dbedbe0"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.444826 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a0f76d21eb88ec11067621dd036c091531ffd932e88f3154c2243c0dbedbe0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.444944 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell05c20-account-delete-fvdvh" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.453685 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.454365 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7888d7fbb9-cqj2f" event={"ID":"fbae5c0b-cb91-459a-acb7-e494aedd6d99","Type":"ContainerDied","Data":"27c13e65b896d9912c2a429f95509a8b45f5f1bf4cdadbd39cc61f88e3b8c6b0"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.461970 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" event={"ID":"9afe27be-257c-4ea4-84c1-e41a289ad06a","Type":"ContainerDied","Data":"46855b109dd7a5bcb9825ab463a7ff752920ec20b6f706f3f5e06beb02a61ed3"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.462059 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7bf555f794-8vm7k" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.470980 4988 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.471007 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6tlv\" (UniqueName: \"kubernetes.io/projected/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60-kube-api-access-l6tlv\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.471019 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9afe27be-257c-4ea4-84c1-e41a289ad06a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.471031 4988 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.471042 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx24t\" (UniqueName: \"kubernetes.io/projected/4b82b9a4-6707-446c-abea-2d4a560a43d7-kube-api-access-gx24t\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.471914 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" (UID: "39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.472007 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicanbfce-account-delete-mthln" event={"ID":"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410","Type":"ContainerDied","Data":"857347ca0d00e270378e1d776f5a8e372e8308b59c30a33303161be126a3ed59"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.472054 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="857347ca0d00e270378e1d776f5a8e372e8308b59c30a33303161be126a3ed59" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.474164 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f7fdc8956-g6vw5" event={"ID":"61784d29-67cb-4150-923e-0e819bdde923","Type":"ContainerDied","Data":"ea8da0111c8587b59ae12fccf4684321f6bb6b2d51128c9eefb28473c72a5363"} Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.474433 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f7fdc8956-g6vw5" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.507319 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61784d29-67cb-4150-923e-0e819bdde923" (UID: "61784d29-67cb-4150-923e-0e819bdde923"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.516261 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" (UID: "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.517562 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09c548ca-78f0-4e91-8a5d-dce756b0421e" path="/var/lib/kubelet/pods/09c548ca-78f0-4e91-8a5d-dce756b0421e/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.518210 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7b2124-7faf-4de1-ac89-15eeccc1abe7" path="/var/lib/kubelet/pods/1f7b2124-7faf-4de1-ac89-15eeccc1abe7/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.518724 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="208b3820-2fbb-4e5b-bfa1-170b30f28af6" path="/var/lib/kubelet/pods/208b3820-2fbb-4e5b-bfa1-170b30f28af6/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.519744 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a90861d-abe6-4af4-b8ae-ac44f7d1b748" path="/var/lib/kubelet/pods/2a90861d-abe6-4af4-b8ae-ac44f7d1b748/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.522147 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-config-data podName:2a28a2bd-cf03-47d7-b142-63b066fdeb42 nodeName:}" failed. No retries permitted until 2025-11-23 07:10:39.022123005 +0000 UTC m=+1491.330635768 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-config-data") pod "2a28a2bd-cf03-47d7-b142-63b066fdeb42" (UID: "2a28a2bd-cf03-47d7-b142-63b066fdeb42") : error deleting /var/lib/kubelet/pods/2a28a2bd-cf03-47d7-b142-63b066fdeb42/volume-subpaths: remove /var/lib/kubelet/pods/2a28a2bd-cf03-47d7-b142-63b066fdeb42/volume-subpaths: no such file or directory Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.523261 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2a28a2bd-cf03-47d7-b142-63b066fdeb42" (UID: "2a28a2bd-cf03-47d7-b142-63b066fdeb42"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.526020 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data" (OuterVolumeSpecName: "config-data") pod "fbae5c0b-cb91-459a-acb7-e494aedd6d99" (UID: "fbae5c0b-cb91-459a-acb7-e494aedd6d99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.533976 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="351d084c-73d8-4965-97c8-407826793cd6" path="/var/lib/kubelet/pods/351d084c-73d8-4965-97c8-407826793cd6/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.534813 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" (UID: "39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.537164 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d09b31-ee49-498b-bbaf-368e53723f62" path="/var/lib/kubelet/pods/43d09b31-ee49-498b-bbaf-368e53723f62/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.538068 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f33e05-cf5d-4946-b57b-1cec9f01352b" path="/var/lib/kubelet/pods/51f33e05-cf5d-4946-b57b-1cec9f01352b/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.538282 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b6f8f28-b3df-4d34-a898-74f4dc12f201" (UID: "4b6f8f28-b3df-4d34-a898-74f4dc12f201"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.544026 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd80c51-5410-48ee-98da-4c6509b59e04" path="/var/lib/kubelet/pods/5dd80c51-5410-48ee-98da-4c6509b59e04/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.548258 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832df8ad-6b73-46a8-979f-ec3887c49e83" path="/var/lib/kubelet/pods/832df8ad-6b73-46a8-979f-ec3887c49e83/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.550464 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88aa4931-d135-4771-90fb-302c92874f9e" path="/var/lib/kubelet/pods/88aa4931-d135-4771-90fb-302c92874f9e/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.551200 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" path="/var/lib/kubelet/pods/a2f5e1e6-0051-487f-b9ca-76003e7deed1/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.552186 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" path="/var/lib/kubelet/pods/c4c4a2cd-004d-42ad-bfee-3ec44daff1f1/volumes" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.557304 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "61784d29-67cb-4150-923e-0e819bdde923" (UID: "61784d29-67cb-4150-923e-0e819bdde923"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.572982 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.573051 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.573064 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.573076 4988 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.573089 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbae5c0b-cb91-459a-acb7-e494aedd6d99-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.573182 4988 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.573268 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.573280 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.576711 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" (UID: "39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.579425 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b6f8f28-b3df-4d34-a898-74f4dc12f201" (UID: "4b6f8f28-b3df-4d34-a898-74f4dc12f201"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.587618 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-config-data" (OuterVolumeSpecName: "config-data") pod "4b6f8f28-b3df-4d34-a898-74f4dc12f201" (UID: "4b6f8f28-b3df-4d34-a898-74f4dc12f201"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.590158 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6095bfb4-4519-4c6a-9ded-5f8a0db254c1" (UID: "6095bfb4-4519-4c6a-9ded-5f8a0db254c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.590242 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b6f8f28-b3df-4d34-a898-74f4dc12f201" (UID: "4b6f8f28-b3df-4d34-a898-74f4dc12f201"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.591702 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" (UID: "9cb35e7c-c792-48c9-8f52-ac3e9cc283f6"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.600369 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-config-data" (OuterVolumeSpecName: "config-data") pod "6095bfb4-4519-4c6a-9ded-5f8a0db254c1" (UID: "6095bfb4-4519-4c6a-9ded-5f8a0db254c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.602129 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data" (OuterVolumeSpecName: "config-data") pod "61784d29-67cb-4150-923e-0e819bdde923" (UID: "61784d29-67cb-4150-923e-0e819bdde923"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.608850 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "61784d29-67cb-4150-923e-0e819bdde923" (UID: "61784d29-67cb-4150-923e-0e819bdde923"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.676579 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.679366 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.679468 4988 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.679772 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.679857 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61784d29-67cb-4150-923e-0e819bdde923-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.680001 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.680064 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6f8f28-b3df-4d34-a898-74f4dc12f201-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.680115 4988 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.680182 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.680275 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6095bfb4-4519-4c6a-9ded-5f8a0db254c1-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 
07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.684207 4988 scope.go:117] "RemoveContainer" containerID="0528954f5da33c5e64f1f55abc59161faf590954a1985c469ea4c1f06355f574" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.722634 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance23d0-account-delete-g2lxz" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.738280 4988 scope.go:117] "RemoveContainer" containerID="fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.761319 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.784368 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d94jl\" (UniqueName: \"kubernetes.io/projected/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-kube-api-access-d94jl\") pod \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\" (UID: \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.784493 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-operator-scripts\") pod \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\" (UID: \"81f3e05f-502b-4e0a-b5a2-4ab8ec42c410\") " Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.784836 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.785649 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81f3e05f-502b-4e0a-b5a2-4ab8ec42c410" (UID: "81f3e05f-502b-4e0a-b5a2-4ab8ec42c410"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.785760 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.785998 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="692be1c8-4d8f-4676-89df-19f82b43f043" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.786072 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.789060 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.790024 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-kube-api-access-d94jl" (OuterVolumeSpecName: "kube-api-access-d94jl") pod "81f3e05f-502b-4e0a-b5a2-4ab8ec42c410" (UID: "81f3e05f-502b-4e0a-b5a2-4ab8ec42c410"). InnerVolumeSpecName "kube-api-access-d94jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.792727 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell05c20-account-delete-fvdvh"] Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.794322 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.800683 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.807279 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.807325 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.816702 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zcfbn" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" probeResult="failure" output="command timed out" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.836338 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapicb35-account-delete-wsxgk" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.846135 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell05c20-account-delete-fvdvh"] Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.848123 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.876784 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zcfbn" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" probeResult="failure" output=< Nov 23 07:10:38 crc kubenswrapper[4988]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Nov 23 07:10:38 crc kubenswrapper[4988]: > Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.887266 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m8gh\" (UniqueName: \"kubernetes.io/projected/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-kube-api-access-4m8gh\") pod \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\" (UID: \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.887345 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-operator-scripts\") pod \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\" (UID: \"c8d7a2fe-56c9-4c21-98b4-88c12252bbe7\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.887441 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c54xl\" (UniqueName: \"kubernetes.io/projected/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-kube-api-access-c54xl\") pod \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.887623 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-config-data\") pod \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.887684 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-combined-ca-bundle\") pod \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\" (UID: \"f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.888264 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.888841 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d94jl\" (UniqueName: \"kubernetes.io/projected/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-kube-api-access-d94jl\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.888868 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.896396 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8d7a2fe-56c9-4c21-98b4-88c12252bbe7" (UID: "c8d7a2fe-56c9-4c21-98b4-88c12252bbe7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.897173 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.900593 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.904263 4988 scope.go:117] "RemoveContainer" containerID="fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7" Nov 23 07:10:38 crc kubenswrapper[4988]: E1123 07:10:38.905827 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7\": container with ID starting with fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7 not found: ID does not exist" containerID="fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.906245 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7"} err="failed to get container status \"fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7\": rpc error: code = NotFound desc = could not find container \"fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7\": container with ID starting with fecbb01a832166acead1a82796faf6599865c1e307a2aa1a777058e62d4bddb7 not found: ID does not exist" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.906283 4988 scope.go:117] "RemoveContainer" containerID="5ebe52496c238117a9598c27a6bbc10f3777cd5ad280a8dd4625534d34f3fa75" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.920092 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-kube-api-access-c54xl" (OuterVolumeSpecName: "kube-api-access-c54xl") pod "f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" (UID: "f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72"). InnerVolumeSpecName "kube-api-access-c54xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.922597 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-kube-api-access-4m8gh" (OuterVolumeSpecName: "kube-api-access-4m8gh") pod "c8d7a2fe-56c9-4c21-98b4-88c12252bbe7" (UID: "c8d7a2fe-56c9-4c21-98b4-88c12252bbe7"). InnerVolumeSpecName "kube-api-access-4m8gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.940456 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.940902 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" (UID: "f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.967419 4988 scope.go:117] "RemoveContainer" containerID="c3aa642238bf2e182f6aa8b168ebe96dd9671c5155884f7a97375c632ebe4f02" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.975979 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron21b3-account-delete-p2f8j"] Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.984324 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron21b3-account-delete-p2f8j"] Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990670 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcsrv\" (UniqueName: \"kubernetes.io/projected/610d9cd6-32c2-4a24-a462-df3c8da3f90f-kube-api-access-fcsrv\") pod \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\" (UID: \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990707 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5znj\" (UniqueName: \"kubernetes.io/projected/52da7e45-da4b-4b22-b4d9-de675091c282-kube-api-access-f5znj\") pod \"52da7e45-da4b-4b22-b4d9-de675091c282\" (UID: \"52da7e45-da4b-4b22-b4d9-de675091c282\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990735 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-logs\") pod \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990755 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/610d9cd6-32c2-4a24-a462-df3c8da3f90f-operator-scripts\") pod \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\" (UID: \"610d9cd6-32c2-4a24-a462-df3c8da3f90f\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990774 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-internal-tls-certs\") pod \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990813 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990833 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-scripts\") pod \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990889 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw8zv\" (UniqueName: \"kubernetes.io/projected/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-kube-api-access-xw8zv\") pod \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990909 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-config-data\") pod \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990928 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52da7e45-da4b-4b22-b4d9-de675091c282-operator-scripts\") pod \"52da7e45-da4b-4b22-b4d9-de675091c282\" (UID: \"52da7e45-da4b-4b22-b4d9-de675091c282\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990952 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-httpd-run\") pod \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990970 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-config-data\") pod \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.990984 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-combined-ca-bundle\") pod \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991014 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-scripts\") pod \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991061 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-httpd-run\") pod \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991100 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-logs\") pod \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991123 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brc9m\" (UniqueName: \"kubernetes.io/projected/720d09f3-1104-47a0-93e9-ffb48cf1ae69-kube-api-access-brc9m\") pod \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991144 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-public-tls-certs\") pod \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991158 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-combined-ca-bundle\") pod \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\" (UID: \"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991206 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\" (UID: \"720d09f3-1104-47a0-93e9-ffb48cf1ae69\") " Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991502 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m8gh\" (UniqueName: \"kubernetes.io/projected/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-kube-api-access-4m8gh\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991519 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991530 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c54xl\" (UniqueName: \"kubernetes.io/projected/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-kube-api-access-c54xl\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:38 crc kubenswrapper[4988]: I1123 07:10:38.991539 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:38.997600 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52da7e45-da4b-4b22-b4d9-de675091c282-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52da7e45-da4b-4b22-b4d9-de675091c282" (UID: "52da7e45-da4b-4b22-b4d9-de675091c282"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:38.997844 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "720d09f3-1104-47a0-93e9-ffb48cf1ae69" (UID: "720d09f3-1104-47a0-93e9-ffb48cf1ae69"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:38.998313 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" (UID: "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:38.998758 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-logs" (OuterVolumeSpecName: "logs") pod "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" (UID: "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.003749 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-logs" (OuterVolumeSpecName: "logs") pod "720d09f3-1104-47a0-93e9-ffb48cf1ae69" (UID: "720d09f3-1104-47a0-93e9-ffb48cf1ae69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.004699 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-scripts" (OuterVolumeSpecName: "scripts") pod "720d09f3-1104-47a0-93e9-ffb48cf1ae69" (UID: "720d09f3-1104-47a0-93e9-ffb48cf1ae69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.005494 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7bf555f794-8vm7k"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.006174 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/610d9cd6-32c2-4a24-a462-df3c8da3f90f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "610d9cd6-32c2-4a24-a462-df3c8da3f90f" (UID: "610d9cd6-32c2-4a24-a462-df3c8da3f90f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.008957 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" (UID: "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.013310 4988 scope.go:117] "RemoveContainer" containerID="db7496dcd1faf49f05c99168f0af122bee0300bc60d1e286d3f55f6eb98a7498" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.016024 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7bf555f794-8vm7k"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.016778 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/720d09f3-1104-47a0-93e9-ffb48cf1ae69-kube-api-access-brc9m" (OuterVolumeSpecName: "kube-api-access-brc9m") pod "720d09f3-1104-47a0-93e9-ffb48cf1ae69" (UID: "720d09f3-1104-47a0-93e9-ffb48cf1ae69"). InnerVolumeSpecName "kube-api-access-brc9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.016880 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610d9cd6-32c2-4a24-a462-df3c8da3f90f-kube-api-access-fcsrv" (OuterVolumeSpecName: "kube-api-access-fcsrv") pod "610d9cd6-32c2-4a24-a462-df3c8da3f90f" (UID: "610d9cd6-32c2-4a24-a462-df3c8da3f90f"). InnerVolumeSpecName "kube-api-access-fcsrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.016865 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "720d09f3-1104-47a0-93e9-ffb48cf1ae69" (UID: "720d09f3-1104-47a0-93e9-ffb48cf1ae69"). 
InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.017362 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-kube-api-access-xw8zv" (OuterVolumeSpecName: "kube-api-access-xw8zv") pod "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" (UID: "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4"). InnerVolumeSpecName "kube-api-access-xw8zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.018596 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-scripts" (OuterVolumeSpecName: "scripts") pod "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" (UID: "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.024823 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-config-data" (OuterVolumeSpecName: "config-data") pod "f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" (UID: "f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.034311 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52da7e45-da4b-4b22-b4d9-de675091c282-kube-api-access-f5znj" (OuterVolumeSpecName: "kube-api-access-f5znj") pod "52da7e45-da4b-4b22-b4d9-de675091c282" (UID: "52da7e45-da4b-4b22-b4d9-de675091c282"). InnerVolumeSpecName "kube-api-access-f5znj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.055633 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" (UID: "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.057712 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-55cfdd5f8d-kn94x"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.063536 4988 scope.go:117] "RemoveContainer" containerID="d2b1f46c3d98eca77fce5ba02072ca8a40d4ce3218cb93220d983e5effd1b750" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.063549 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-55cfdd5f8d-kn94x"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.069124 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.069239 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "720d09f3-1104-47a0-93e9-ffb48cf1ae69" (UID: "720d09f3-1104-47a0-93e9-ffb48cf1ae69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.074063 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.088174 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "720d09f3-1104-47a0-93e9-ffb48cf1ae69" (UID: "720d09f3-1104-47a0-93e9-ffb48cf1ae69"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.088383 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-config-data" (OuterVolumeSpecName: "config-data") pod "720d09f3-1104-47a0-93e9-ffb48cf1ae69" (UID: "720d09f3-1104-47a0-93e9-ffb48cf1ae69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.088456 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7888d7fbb9-cqj2f"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.094301 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-config-data\") pod \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\" (UID: \"2a28a2bd-cf03-47d7-b142-63b066fdeb42\") " Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.094659 4988 scope.go:117] "RemoveContainer" containerID="dcd0354caf733195781e1f948fd33c5f325bbd542de067c894db308d568445d1" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095149 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095166 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095177 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brc9m\" (UniqueName: \"kubernetes.io/projected/720d09f3-1104-47a0-93e9-ffb48cf1ae69-kube-api-access-brc9m\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095186 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095221 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095233 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcsrv\" (UniqueName: \"kubernetes.io/projected/610d9cd6-32c2-4a24-a462-df3c8da3f90f-kube-api-access-fcsrv\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095243 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5znj\" (UniqueName: 
\"kubernetes.io/projected/52da7e45-da4b-4b22-b4d9-de675091c282-kube-api-access-f5znj\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095251 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/720d09f3-1104-47a0-93e9-ffb48cf1ae69-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095259 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/610d9cd6-32c2-4a24-a462-df3c8da3f90f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095268 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095281 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095290 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095298 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095316 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw8zv\" (UniqueName: \"kubernetes.io/projected/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-kube-api-access-xw8zv\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095325 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095334 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52da7e45-da4b-4b22-b4d9-de675091c282-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095342 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095350 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.095359 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/720d09f3-1104-47a0-93e9-ffb48cf1ae69-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.099265 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-config-data" (OuterVolumeSpecName: "config-data") pod 
"2a28a2bd-cf03-47d7-b142-63b066fdeb42" (UID: "2a28a2bd-cf03-47d7-b142-63b066fdeb42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.102315 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-config-data" (OuterVolumeSpecName: "config-data") pod "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" (UID: "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.103632 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" (UID: "bd525929-59bb-4b7f-b3a4-12e2e4a03cd4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.103674 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7888d7fbb9-cqj2f"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.119121 4988 scope.go:117] "RemoveContainer" containerID="543aecdb571138b8617239f7cdae649de1a2ea369419630d777d643c64fc0d98" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.120235 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.121443 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.131260 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.133979 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.136283 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.141900 4988 scope.go:117] "RemoveContainer" containerID="bb50deda847e5c4d01192688b630e90316d62e3b678ef1a62e6da9a8a390be52" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.153372 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.164563 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-f7fdc8956-g6vw5"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.164645 4988 scope.go:117] "RemoveContainer" containerID="c03076932f1bd8fe2ee2079c7b8e87ba81b2baeb499ec22610473dfd45ba6936" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.169608 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-f7fdc8956-g6vw5"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.197129 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.197157 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2a28a2bd-cf03-47d7-b142-63b066fdeb42-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.197168 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.197179 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.197208 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.210361 4988 scope.go:117] "RemoveContainer" containerID="0f03ec543429a626c8d33b783b6684da955e2b3df62fa5e03977931c6cff0b9b" Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.231110 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.232691 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.233715 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.233749 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="89362c9c-bf2d-4e66-8ac3-7b288262b3d8" containerName="nova-scheduler-scheduler" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.241682 4988 scope.go:117] "RemoveContainer" containerID="8d370a258077eef29df07553ebb57bc3f0df94518539e125a0c3eaef83ef1b5b" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.266846 4988 scope.go:117] "RemoveContainer" containerID="c20e80b908053c9ad38b943cfa24ecc2c59c1063094c7728511419afd22791ce" Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.298347 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.298407 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data podName:0b12d6f8-ea7a-4a60-b459-11563683791d nodeName:}" failed. No retries permitted until 2025-11-23 07:10:47.298392987 +0000 UTC m=+1499.606905750 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data") pod "rabbitmq-server-0" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d") : configmap "rabbitmq-config-data" not found Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.298531 4988 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.298594 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data podName:692be1c8-4d8f-4676-89df-19f82b43f043 nodeName:}" failed. No retries permitted until 2025-11-23 07:10:47.298577722 +0000 UTC m=+1499.607090485 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data") pod "rabbitmq-cell1-server-0" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043") : configmap "rabbitmq-cell1-config-data" not found Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.334157 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.346004 4988 scope.go:117] "RemoveContainer" containerID="d1dbf13c4c51f91d80504e9813025e575415f2d87f015a192bbdff65a11f6ae1" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.348715 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.353732 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.375667 4988 scope.go:117] "RemoveContainer" containerID="398b87a138cec8f732064d3f7cb513a557fa9c4ba1deb887d5a4585196f85d30" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.398756 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zzz4\" (UniqueName: \"kubernetes.io/projected/612dbf27-0967-4833-a62f-c86a008fe257-kube-api-access-9zzz4\") pod \"612dbf27-0967-4833-a62f-c86a008fe257\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.398906 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-public-tls-certs\") pod \"612dbf27-0967-4833-a62f-c86a008fe257\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.399008 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-config-data\") pod \"612dbf27-0967-4833-a62f-c86a008fe257\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.399028 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/612dbf27-0967-4833-a62f-c86a008fe257-logs\") pod \"612dbf27-0967-4833-a62f-c86a008fe257\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.399075 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-internal-tls-certs\") pod \"612dbf27-0967-4833-a62f-c86a008fe257\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.399105 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-combined-ca-bundle\") pod \"612dbf27-0967-4833-a62f-c86a008fe257\" (UID: \"612dbf27-0967-4833-a62f-c86a008fe257\") " Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.399452 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/612dbf27-0967-4833-a62f-c86a008fe257-logs" (OuterVolumeSpecName: "logs") pod "612dbf27-0967-4833-a62f-c86a008fe257" (UID: "612dbf27-0967-4833-a62f-c86a008fe257"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.413428 4988 scope.go:117] "RemoveContainer" containerID="9177522bc27ecd27e87dbfaffac6a6f6968557f56ceb08013aef630c895b62fb" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.419609 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "612dbf27-0967-4833-a62f-c86a008fe257" (UID: "612dbf27-0967-4833-a62f-c86a008fe257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.424679 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-config-data" (OuterVolumeSpecName: "config-data") pod "612dbf27-0967-4833-a62f-c86a008fe257" (UID: "612dbf27-0967-4833-a62f-c86a008fe257"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.427513 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612dbf27-0967-4833-a62f-c86a008fe257-kube-api-access-9zzz4" (OuterVolumeSpecName: "kube-api-access-9zzz4") pod "612dbf27-0967-4833-a62f-c86a008fe257" (UID: "612dbf27-0967-4833-a62f-c86a008fe257"). InnerVolumeSpecName "kube-api-access-9zzz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.442530 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "612dbf27-0967-4833-a62f-c86a008fe257" (UID: "612dbf27-0967-4833-a62f-c86a008fe257"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.453757 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "612dbf27-0967-4833-a62f-c86a008fe257" (UID: "612dbf27-0967-4833-a62f-c86a008fe257"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.479396 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.495596 4988 generic.go:334] "Generic (PLEG): container finished" podID="612dbf27-0967-4833-a62f-c86a008fe257" containerID="fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0" exitCode=0 Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.495653 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"612dbf27-0967-4833-a62f-c86a008fe257","Type":"ContainerDied","Data":"fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0"} Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.495670 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"612dbf27-0967-4833-a62f-c86a008fe257","Type":"ContainerDied","Data":"6d4a82952ef89bf6a772c7a8d6db803889cc71e5960e06c24c1c93e959c39980"} Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.495724 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.503715 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.503744 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.503754 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/612dbf27-0967-4833-a62f-c86a008fe257-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.503766 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.503776 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612dbf27-0967-4833-a62f-c86a008fe257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.503786 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zzz4\" (UniqueName: \"kubernetes.io/projected/612dbf27-0967-4833-a62f-c86a008fe257-kube-api-access-9zzz4\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.515135 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"720d09f3-1104-47a0-93e9-ffb48cf1ae69","Type":"ContainerDied","Data":"6e39c9b2114a3aef3d14e3c92c17e738ef53fc8cd9081f4ce39a37150392fe0f"} Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.515168 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.531987 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bd525929-59bb-4b7f-b3a4-12e2e4a03cd4","Type":"ContainerDied","Data":"a279773d7734d864599f45d58494784423d95659d4008a3b776645c134609b1b"} Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.532075 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.542779 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement2c17-account-delete-ps9t4" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.542855 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapicb35-account-delete-wsxgk" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.542946 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance23d0-account-delete-g2lxz" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.543039 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.543620 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicanbfce-account-delete-mthln" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.682038 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.684003 4988 scope.go:117] "RemoveContainer" containerID="8afa739f5110059b465e6885ff5859dc6184950082e63a26a379905cd8929a41" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.693138 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.699412 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.714022 4988 scope.go:117] "RemoveContainer" containerID="fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.717827 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.741236 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance23d0-account-delete-g2lxz"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.749273 4988 scope.go:117] "RemoveContainer" containerID="a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.754802 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance23d0-account-delete-g2lxz"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.774340 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.793645 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.799472 4988 scope.go:117] "RemoveContainer" 
containerID="fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0" Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.800004 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0\": container with ID starting with fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0 not found: ID does not exist" containerID="fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.800048 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0"} err="failed to get container status \"fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0\": rpc error: code = NotFound desc = could not find container \"fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0\": container with ID starting with fb92e13565e5781f1a10bb35057e610d00e79c16ca0e8a319d6e364f367b81e0 not found: ID does not exist" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.800118 4988 scope.go:117] "RemoveContainer" containerID="a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a" Nov 23 07:10:39 crc kubenswrapper[4988]: E1123 07:10:39.800502 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a\": container with ID starting with a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a not found: ID does not exist" containerID="a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.800538 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a"} err="failed to get container status \"a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a\": rpc error: code = NotFound desc = could not find container \"a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a\": container with ID starting with a9e67c0908ff9aee4332ff4a21c25f6127d8260b033e0380fbe2c00e2439ff5a not found: ID does not exist" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.800560 4988 scope.go:117] "RemoveContainer" containerID="ffc32302cd38e863bb6d6aaea86be25fa12696798a187ea26576d320a9c5ccd3" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.806771 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicanbfce-account-delete-mthln"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.848644 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbicanbfce-account-delete-mthln"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.854343 4988 scope.go:117] "RemoveContainer" containerID="bc100c14d5a403c7cb084cb19a58629ec50c4006b31564afe6806ad8247c5c3d" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.858173 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.866018 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.877106 4988 scope.go:117] "RemoveContainer" containerID="482947669a01cf95a71f90baa22b06aeac92eb3bd55c440708e11d5e72e1f4ca" Nov 23 
07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.905313 4988 scope.go:117] "RemoveContainer" containerID="3efddc33cd9ce7c30bcaaa3df7dc5f188157ba378b301c9729e7ab2b1c6e2333" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.984476 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:10:39 crc kubenswrapper[4988]: I1123 07:10:39.990946 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.116418 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2 is running failed: container process not found" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.116775 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2 is running failed: container process not found" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.117004 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2 is running failed: container process not found" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.117030 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="ovn-northd" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.118790 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-scripts\") pod \"7bde2362-ff90-47d5-845c-8dfcfe826a61\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.118819 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-config-data-default\") pod \"be021496-c112-4578-bfe4-8639fa51480a\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.118834 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-combined-ca-bundle\") pod \"7bde2362-ff90-47d5-845c-8dfcfe826a61\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.118860 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-credential-keys\") pod \"7bde2362-ff90-47d5-845c-8dfcfe826a61\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.118886 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-combined-ca-bundle\") pod \"be021496-c112-4578-bfe4-8639fa51480a\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.120665 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "be021496-c112-4578-bfe4-8639fa51480a" (UID: "be021496-c112-4578-bfe4-8639fa51480a"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.120721 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-internal-tls-certs\") pod \"7bde2362-ff90-47d5-845c-8dfcfe826a61\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.120771 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrmhk\" (UniqueName: \"kubernetes.io/projected/be021496-c112-4578-bfe4-8639fa51480a-kube-api-access-rrmhk\") pod \"be021496-c112-4578-bfe4-8639fa51480a\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.120809 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-operator-scripts\") pod \"be021496-c112-4578-bfe4-8639fa51480a\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.120831 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-config-data\") pod \"7bde2362-ff90-47d5-845c-8dfcfe826a61\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.120852 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-kolla-config\") pod \"be021496-c112-4578-bfe4-8639fa51480a\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.120888 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-galera-tls-certs\") pod \"be021496-c112-4578-bfe4-8639fa51480a\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.120938 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-public-tls-certs\") pod \"7bde2362-ff90-47d5-845c-8dfcfe826a61\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " Nov 23 07:10:40 crc 
kubenswrapper[4988]: I1123 07:10:40.120962 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"be021496-c112-4578-bfe4-8639fa51480a\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.120988 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-fernet-keys\") pod \"7bde2362-ff90-47d5-845c-8dfcfe826a61\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.121039 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdlj8\" (UniqueName: \"kubernetes.io/projected/7bde2362-ff90-47d5-845c-8dfcfe826a61-kube-api-access-vdlj8\") pod \"7bde2362-ff90-47d5-845c-8dfcfe826a61\" (UID: \"7bde2362-ff90-47d5-845c-8dfcfe826a61\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.121097 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be021496-c112-4578-bfe4-8639fa51480a-config-data-generated\") pod \"be021496-c112-4578-bfe4-8639fa51480a\" (UID: \"be021496-c112-4578-bfe4-8639fa51480a\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.122392 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "be021496-c112-4578-bfe4-8639fa51480a" (UID: "be021496-c112-4578-bfe4-8639fa51480a"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.122977 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be021496-c112-4578-bfe4-8639fa51480a-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "be021496-c112-4578-bfe4-8639fa51480a" (UID: "be021496-c112-4578-bfe4-8639fa51480a"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.125478 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be021496-c112-4578-bfe4-8639fa51480a" (UID: "be021496-c112-4578-bfe4-8639fa51480a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.126767 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7bde2362-ff90-47d5-845c-8dfcfe826a61" (UID: "7bde2362-ff90-47d5-845c-8dfcfe826a61"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.127673 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7bde2362-ff90-47d5-845c-8dfcfe826a61" (UID: "7bde2362-ff90-47d5-845c-8dfcfe826a61"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.131474 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bde2362-ff90-47d5-845c-8dfcfe826a61-kube-api-access-vdlj8" (OuterVolumeSpecName: "kube-api-access-vdlj8") pod "7bde2362-ff90-47d5-845c-8dfcfe826a61" (UID: "7bde2362-ff90-47d5-845c-8dfcfe826a61"). InnerVolumeSpecName "kube-api-access-vdlj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.133869 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be021496-c112-4578-bfe4-8639fa51480a-kube-api-access-rrmhk" (OuterVolumeSpecName: "kube-api-access-rrmhk") pod "be021496-c112-4578-bfe4-8639fa51480a" (UID: "be021496-c112-4578-bfe4-8639fa51480a"). InnerVolumeSpecName "kube-api-access-rrmhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.135769 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-scripts" (OuterVolumeSpecName: "scripts") pod "7bde2362-ff90-47d5-845c-8dfcfe826a61" (UID: "7bde2362-ff90-47d5-845c-8dfcfe826a61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.138830 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "be021496-c112-4578-bfe4-8639fa51480a" (UID: "be021496-c112-4578-bfe4-8639fa51480a"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.159425 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be021496-c112-4578-bfe4-8639fa51480a" (UID: "be021496-c112-4578-bfe4-8639fa51480a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.169397 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-config-data" (OuterVolumeSpecName: "config-data") pod "7bde2362-ff90-47d5-845c-8dfcfe826a61" (UID: "7bde2362-ff90-47d5-845c-8dfcfe826a61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.185481 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7bde2362-ff90-47d5-845c-8dfcfe826a61" (UID: "7bde2362-ff90-47d5-845c-8dfcfe826a61"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.187013 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "be021496-c112-4578-bfe4-8639fa51480a" (UID: "be021496-c112-4578-bfe4-8639fa51480a"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.203884 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7bde2362-ff90-47d5-845c-8dfcfe826a61" (UID: "7bde2362-ff90-47d5-845c-8dfcfe826a61"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.210672 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bde2362-ff90-47d5-845c-8dfcfe826a61" (UID: "7bde2362-ff90-47d5-845c-8dfcfe826a61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222677 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdlj8\" (UniqueName: \"kubernetes.io/projected/7bde2362-ff90-47d5-845c-8dfcfe826a61-kube-api-access-vdlj8\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222710 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be021496-c112-4578-bfe4-8639fa51480a-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222723 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222737 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222749 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222760 4988 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222772 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222783 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222794 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrmhk\" (UniqueName: \"kubernetes.io/projected/be021496-c112-4578-bfe4-8639fa51480a-kube-api-access-rrmhk\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222805 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222816 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222826 4988 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be021496-c112-4578-bfe4-8639fa51480a-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222838 4988 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be021496-c112-4578-bfe4-8639fa51480a-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222850 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222882 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.222894 4988 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7bde2362-ff90-47d5-845c-8dfcfe826a61-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.234156 4988 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 23 07:10:40 crc kubenswrapper[4988]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-23T07:10:32Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 23 07:10:40 crc kubenswrapper[4988]: /etc/init.d/functions: line 589: 470 Alarm clock "$@" Nov 23 07:10:40 crc kubenswrapper[4988]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-zcfbn" message=< Nov 23 07:10:40 crc kubenswrapper[4988]: Exiting ovn-controller (1) [FAILED] Nov 23 07:10:40 crc kubenswrapper[4988]: Killing ovn-controller (1) [ OK ] Nov 23 07:10:40 crc kubenswrapper[4988]: Killing ovn-controller (1) with SIGKILL [ OK ] Nov 23 07:10:40 crc kubenswrapper[4988]: 2025-11-23T07:10:32Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 23 07:10:40 crc kubenswrapper[4988]: /etc/init.d/functions: line 589: 470 Alarm clock "$@" Nov 23 07:10:40 crc kubenswrapper[4988]: > Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.234510 4988 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 23 07:10:40 crc kubenswrapper[4988]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-23T07:10:32Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Nov 23 07:10:40 crc kubenswrapper[4988]: /etc/init.d/functions: line 589: 470 Alarm clock "$@" Nov 23 07:10:40 crc kubenswrapper[4988]: > pod="openstack/ovn-controller-zcfbn" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" containerID="cri-o://02b318a143f93f5162fb1a66a0b628ccb66bbcc980bb1b1551d1e97b656766fe" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 
07:10:40.234673 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-zcfbn" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" containerID="cri-o://02b318a143f93f5162fb1a66a0b628ccb66bbcc980bb1b1551d1e97b656766fe" gracePeriod=22 Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.241589 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.324563 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.505614 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" path="/var/lib/kubelet/pods/2a28a2bd-cf03-47d7-b142-63b066fdeb42/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.506222 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" path="/var/lib/kubelet/pods/39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.506713 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" path="/var/lib/kubelet/pods/4b6f8f28-b3df-4d34-a898-74f4dc12f201/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.508098 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b82b9a4-6707-446c-abea-2d4a560a43d7" path="/var/lib/kubelet/pods/4b82b9a4-6707-446c-abea-2d4a560a43d7/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.508619 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" path="/var/lib/kubelet/pods/6095bfb4-4519-4c6a-9ded-5f8a0db254c1/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.509531 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612dbf27-0967-4833-a62f-c86a008fe257" path="/var/lib/kubelet/pods/612dbf27-0967-4833-a62f-c86a008fe257/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.510547 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61784d29-67cb-4150-923e-0e819bdde923" path="/var/lib/kubelet/pods/61784d29-67cb-4150-923e-0e819bdde923/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.511329 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" path="/var/lib/kubelet/pods/720d09f3-1104-47a0-93e9-ffb48cf1ae69/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.512660 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f3e05f-502b-4e0a-b5a2-4ab8ec42c410" path="/var/lib/kubelet/pods/81f3e05f-502b-4e0a-b5a2-4ab8ec42c410/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.513382 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9afe27be-257c-4ea4-84c1-e41a289ad06a" path="/var/lib/kubelet/pods/9afe27be-257c-4ea4-84c1-e41a289ad06a/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.515322 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" path="/var/lib/kubelet/pods/9cb35e7c-c792-48c9-8f52-ac3e9cc283f6/volumes" Nov 23 07:10:40 crc 
kubenswrapper[4988]: I1123 07:10:40.516404 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" path="/var/lib/kubelet/pods/bd525929-59bb-4b7f-b3a4-12e2e4a03cd4/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.517124 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d7a2fe-56c9-4c21-98b4-88c12252bbe7" path="/var/lib/kubelet/pods/c8d7a2fe-56c9-4c21-98b4-88c12252bbe7/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.517629 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4be2080-1204-4f6e-ac00-bba757695872" path="/var/lib/kubelet/pods/d4be2080-1204-4f6e-ac00-bba757695872/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.518831 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece4b7bd-2b01-4ad3-8782-a4d7341f0b60" path="/var/lib/kubelet/pods/ece4b7bd-2b01-4ad3-8782-a4d7341f0b60/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.519289 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" path="/var/lib/kubelet/pods/f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.526786 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" path="/var/lib/kubelet/pods/fbae5c0b-cb91-459a-acb7-e494aedd6d99/volumes" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.555334 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.556044 4988 generic.go:334] "Generic (PLEG): container finished" podID="be021496-c112-4578-bfe4-8639fa51480a" containerID="b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5" exitCode=0 Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.556090 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be021496-c112-4578-bfe4-8639fa51480a","Type":"ContainerDied","Data":"b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.556151 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be021496-c112-4578-bfe4-8639fa51480a","Type":"ContainerDied","Data":"47b91f5e3569d48f4628c1cba860429e403d05a87989e729f284b37431b94832"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.556243 4988 scope.go:117] "RemoveContainer" containerID="b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.556259 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.578442 4988 generic.go:334] "Generic (PLEG): container finished" podID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerID="901399c306cb37106a8d64f41934670193530a63fe14371cd50172d426d923d6" exitCode=0 Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.578541 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b12d6f8-ea7a-4a60-b459-11563683791d","Type":"ContainerDied","Data":"901399c306cb37106a8d64f41934670193530a63fe14371cd50172d426d923d6"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.578570 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b12d6f8-ea7a-4a60-b459-11563683791d","Type":"ContainerDied","Data":"01ad1a753187cdbe7823b32a17325dbe6c3d6c6783c3bd1689d04f494855c0fb"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.578583 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01ad1a753187cdbe7823b32a17325dbe6c3d6c6783c3bd1689d04f494855c0fb" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.594324 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2e680706-1677-4f92-9957-9dd477bbc7be/ovn-northd/0.log" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.594435 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.596358 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zcfbn_cdef8d22-1ecf-4086-9506-16378fd96db2/ovn-controller/0.log" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.596401 4988 generic.go:334] "Generic (PLEG): container finished" podID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerID="02b318a143f93f5162fb1a66a0b628ccb66bbcc980bb1b1551d1e97b656766fe" exitCode=137 Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.596452 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zcfbn" event={"ID":"cdef8d22-1ecf-4086-9506-16378fd96db2","Type":"ContainerDied","Data":"02b318a143f93f5162fb1a66a0b628ccb66bbcc980bb1b1551d1e97b656766fe"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.598700 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.599989 4988 generic.go:334] "Generic (PLEG): container finished" podID="7bde2362-ff90-47d5-845c-8dfcfe826a61" containerID="562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95" exitCode=0 Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.600032 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c99767b4c-cbdj7" event={"ID":"7bde2362-ff90-47d5-845c-8dfcfe826a61","Type":"ContainerDied","Data":"562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.600049 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c99767b4c-cbdj7" event={"ID":"7bde2362-ff90-47d5-845c-8dfcfe826a61","Type":"ContainerDied","Data":"86fc2d826b7e686af0952775c9edb0d5758fcf28580139cc5ab2fea25f2186a8"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.600090 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5c99767b4c-cbdj7" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.606367 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zcfbn_cdef8d22-1ecf-4086-9506-16378fd96db2/ovn-controller/0.log" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.606443 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zcfbn" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.615392 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.616296 4988 scope.go:117] "RemoveContainer" containerID="7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628017 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-erlang-cookie\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628071 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628100 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/692be1c8-4d8f-4676-89df-19f82b43f043-pod-info\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628134 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-tls\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628153 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/692be1c8-4d8f-4676-89df-19f82b43f043-erlang-cookie-secret\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628224 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-server-conf\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628242 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-plugins-conf\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628272 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: 
\"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628292 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-confd\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628354 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-plugins\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628478 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff4d8\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-kube-api-access-ff4d8\") pod \"692be1c8-4d8f-4676-89df-19f82b43f043\" (UID: \"692be1c8-4d8f-4676-89df-19f82b43f043\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628622 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.628789 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.629094 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.630657 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.633711 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.633841 4988 generic.go:334] "Generic (PLEG): container finished" podID="692be1c8-4d8f-4676-89df-19f82b43f043" containerID="70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e" exitCode=0 Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.633907 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.633955 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"692be1c8-4d8f-4676-89df-19f82b43f043","Type":"ContainerDied","Data":"70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.633992 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"692be1c8-4d8f-4676-89df-19f82b43f043","Type":"ContainerDied","Data":"e3ee18358f6c32f13a587c8df838c73b6175f58999a8a156ee52948d983da764"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.636061 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.636161 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-kube-api-access-ff4d8" (OuterVolumeSpecName: "kube-api-access-ff4d8") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "kube-api-access-ff4d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.637407 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2e680706-1677-4f92-9957-9dd477bbc7be/ovn-northd/0.log" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.637448 4988 generic.go:334] "Generic (PLEG): container finished" podID="2e680706-1677-4f92-9957-9dd477bbc7be" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" exitCode=139 Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.637527 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.637519 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2e680706-1677-4f92-9957-9dd477bbc7be","Type":"ContainerDied","Data":"a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.637720 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2e680706-1677-4f92-9957-9dd477bbc7be","Type":"ContainerDied","Data":"1ab026f031267b78b55dac80f172695997d786926b8e2ebb91d7af6dd8e3c851"} Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.638278 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.640747 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692be1c8-4d8f-4676-89df-19f82b43f043-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.643674 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/692be1c8-4d8f-4676-89df-19f82b43f043-pod-info" (OuterVolumeSpecName: "pod-info") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.676223 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data" (OuterVolumeSpecName: "config-data") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.690037 4988 scope.go:117] "RemoveContainer" containerID="b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5" Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.690573 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5\": container with ID starting with b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5 not found: ID does not exist" containerID="b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.690626 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5"} err="failed to get container status \"b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5\": rpc error: code = NotFound desc = could not find container \"b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5\": container with ID starting with b1133d5dda9a311b8adb41757a893260ea162a11ad0dc4e90f2111689903dca5 not found: ID does not exist" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.690655 4988 scope.go:117] "RemoveContainer" containerID="7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747" Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.697129 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747\": container with ID starting with 7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747 not found: ID does not exist" containerID="7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.697182 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747"} err="failed to get container status 
\"7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747\": rpc error: code = NotFound desc = could not find container \"7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747\": container with ID starting with 7ad7f9079d26e9ade45f6b19ee708d636809937df69fe7c81a34904444197747 not found: ID does not exist" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.697221 4988 scope.go:117] "RemoveContainer" containerID="562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.723357 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5c99767b4c-cbdj7"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.729128 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5c99767b4c-cbdj7"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.729534 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-log-ovn\") pod \"cdef8d22-1ecf-4086-9506-16378fd96db2\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.729578 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq6nc\" (UniqueName: \"kubernetes.io/projected/2e680706-1677-4f92-9957-9dd477bbc7be-kube-api-access-lq6nc\") pod \"2e680706-1677-4f92-9957-9dd477bbc7be\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.729682 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "cdef8d22-1ecf-4086-9506-16378fd96db2" (UID: "cdef8d22-1ecf-4086-9506-16378fd96db2"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.729722 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-ovn-controller-tls-certs\") pod \"cdef8d22-1ecf-4086-9506-16378fd96db2\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.729749 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-tls\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730108 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-plugins\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730146 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-config\") pod \"2e680706-1677-4f92-9957-9dd477bbc7be\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730173 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run-ovn\") pod \"cdef8d22-1ecf-4086-9506-16378fd96db2\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730208 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-server-conf\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730282 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730313 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-rundir\") pod \"2e680706-1677-4f92-9957-9dd477bbc7be\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730338 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b12d6f8-ea7a-4a60-b459-11563683791d-pod-info\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730359 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-combined-ca-bundle\") pod \"cdef8d22-1ecf-4086-9506-16378fd96db2\" (UID: 
\"cdef8d22-1ecf-4086-9506-16378fd96db2\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730383 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgs4q\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-kube-api-access-vgs4q\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730414 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-plugins-conf\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730432 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdef8d22-1ecf-4086-9506-16378fd96db2-scripts\") pod \"cdef8d22-1ecf-4086-9506-16378fd96db2\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730453 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b12d6f8-ea7a-4a60-b459-11563683791d-erlang-cookie-secret\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730475 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run\") pod \"cdef8d22-1ecf-4086-9506-16378fd96db2\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730495 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-combined-ca-bundle\") pod \"2e680706-1677-4f92-9957-9dd477bbc7be\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730510 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-northd-tls-certs\") pod \"2e680706-1677-4f92-9957-9dd477bbc7be\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730528 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwj6f\" (UniqueName: \"kubernetes.io/projected/cdef8d22-1ecf-4086-9506-16378fd96db2-kube-api-access-dwj6f\") pod \"cdef8d22-1ecf-4086-9506-16378fd96db2\" (UID: \"cdef8d22-1ecf-4086-9506-16378fd96db2\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730546 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-scripts\") pod \"2e680706-1677-4f92-9957-9dd477bbc7be\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730561 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-erlang-cookie\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" 
(UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730583 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-confd\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730600 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0b12d6f8-ea7a-4a60-b459-11563683791d\" (UID: \"0b12d6f8-ea7a-4a60-b459-11563683791d\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730621 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-metrics-certs-tls-certs\") pod \"2e680706-1677-4f92-9957-9dd477bbc7be\" (UID: \"2e680706-1677-4f92-9957-9dd477bbc7be\") " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730871 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730886 4988 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/692be1c8-4d8f-4676-89df-19f82b43f043-pod-info\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730894 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730905 4988 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/692be1c8-4d8f-4676-89df-19f82b43f043-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730913 4988 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730929 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730938 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.730948 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff4d8\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-kube-api-access-ff4d8\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.750226 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.751118 4988 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.752170 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdef8d22-1ecf-4086-9506-16378fd96db2-scripts" (OuterVolumeSpecName: "scripts") pod "cdef8d22-1ecf-4086-9506-16378fd96db2" (UID: "cdef8d22-1ecf-4086-9506-16378fd96db2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.752865 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "2e680706-1677-4f92-9957-9dd477bbc7be" (UID: "2e680706-1677-4f92-9957-9dd477bbc7be"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.753603 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "cdef8d22-1ecf-4086-9506-16378fd96db2" (UID: "cdef8d22-1ecf-4086-9506-16378fd96db2"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.754909 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.755554 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-scripts" (OuterVolumeSpecName: "scripts") pod "2e680706-1677-4f92-9957-9dd477bbc7be" (UID: "2e680706-1677-4f92-9957-9dd477bbc7be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.757992 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-server-conf" (OuterVolumeSpecName: "server-conf") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.758175 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdef8d22-1ecf-4086-9506-16378fd96db2-kube-api-access-dwj6f" (OuterVolumeSpecName: "kube-api-access-dwj6f") pod "cdef8d22-1ecf-4086-9506-16378fd96db2" (UID: "cdef8d22-1ecf-4086-9506-16378fd96db2"). InnerVolumeSpecName "kube-api-access-dwj6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.760152 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run" (OuterVolumeSpecName: "var-run") pod "cdef8d22-1ecf-4086-9506-16378fd96db2" (UID: "cdef8d22-1ecf-4086-9506-16378fd96db2"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.761373 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.769361 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.769409 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b12d6f8-ea7a-4a60-b459-11563683791d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.770706 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-kube-api-access-vgs4q" (OuterVolumeSpecName: "kube-api-access-vgs4q") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "kube-api-access-vgs4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.770886 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.771632 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e680706-1677-4f92-9957-9dd477bbc7be-kube-api-access-lq6nc" (OuterVolumeSpecName: "kube-api-access-lq6nc") pod "2e680706-1677-4f92-9957-9dd477bbc7be" (UID: "2e680706-1677-4f92-9957-9dd477bbc7be"). InnerVolumeSpecName "kube-api-access-lq6nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.776474 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-config" (OuterVolumeSpecName: "config") pod "2e680706-1677-4f92-9957-9dd477bbc7be" (UID: "2e680706-1677-4f92-9957-9dd477bbc7be"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.786386 4988 scope.go:117] "RemoveContainer" containerID="562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95" Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.786816 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95\": container with ID starting with 562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95 not found: ID does not exist" containerID="562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.786861 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95"} err="failed to get container status \"562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95\": rpc error: code = NotFound desc = could not find container \"562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95\": container with ID starting with 562ac9072d9857de515457d5d7332da167052811ee66b717043f10ceccecef95 not found: ID does not exist" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.786899 4988 scope.go:117] "RemoveContainer" containerID="70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.794348 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0b12d6f8-ea7a-4a60-b459-11563683791d-pod-info" (OuterVolumeSpecName: "pod-info") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.819670 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pp579"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.824711 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data" (OuterVolumeSpecName: "config-data") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.828549 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pp579"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832125 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832154 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832165 4988 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b12d6f8-ea7a-4a60-b459-11563683791d-pod-info\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832177 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgs4q\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-kube-api-access-vgs4q\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832206 4988 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832218 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdef8d22-1ecf-4086-9506-16378fd96db2-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832230 4988 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b12d6f8-ea7a-4a60-b459-11563683791d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832244 4988 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832255 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwj6f\" (UniqueName: \"kubernetes.io/projected/cdef8d22-1ecf-4086-9506-16378fd96db2-kube-api-access-dwj6f\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832265 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832277 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832310 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832322 4988 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832334 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq6nc\" (UniqueName: \"kubernetes.io/projected/2e680706-1677-4f92-9957-9dd477bbc7be-kube-api-access-lq6nc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832345 4988 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/692be1c8-4d8f-4676-89df-19f82b43f043-server-conf\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832357 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832367 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832378 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832389 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e680706-1677-4f92-9957-9dd477bbc7be-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.832399 4988 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdef8d22-1ecf-4086-9506-16378fd96db2-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.837753 4988 scope.go:117] "RemoveContainer" containerID="ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.839811 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder80d1-account-delete-sq4dt"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.844797 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e680706-1677-4f92-9957-9dd477bbc7be" (UID: "2e680706-1677-4f92-9957-9dd477bbc7be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.846210 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-80d1-account-create-zv6s7"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.852525 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder80d1-account-delete-sq4dt"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.853140 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.857894 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-80d1-account-create-zv6s7"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.863684 4988 scope.go:117] "RemoveContainer" containerID="70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.864556 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-server-conf" (OuterVolumeSpecName: "server-conf") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.865036 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e\": container with ID starting with 70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e not found: ID does not exist" containerID="70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.865062 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e"} err="failed to get container status \"70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e\": rpc error: code = NotFound desc = could not find container \"70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e\": container with ID starting with 70a3a73f90715f75b25586cbe7cc61357780c34b0e4d4fc127e57f562c1bf01e not found: ID does not exist" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.865081 4988 scope.go:117] "RemoveContainer" containerID="ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a" Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.865511 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a\": container with ID starting with ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a not found: ID does not exist" containerID="ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.865532 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a"} err="failed to get container status \"ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a\": rpc error: code = NotFound desc = could not find container 
\"ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a\": container with ID starting with ae07963a3793e221aa5d2e80ad9167671316ecf236cef96d5a7dc3d8c0dfd50a not found: ID does not exist" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.865548 4988 scope.go:117] "RemoveContainer" containerID="5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.897496 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nwt2p"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.909762 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nwt2p"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.913142 4988 scope.go:117] "RemoveContainer" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.914782 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement2c17-account-delete-ps9t4"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.919447 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2c17-account-create-hnw2d"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.926350 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "692be1c8-4d8f-4676-89df-19f82b43f043" (UID: "692be1c8-4d8f-4676-89df-19f82b43f043"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.933504 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2c17-account-create-hnw2d"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.938110 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.938145 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/692be1c8-4d8f-4676-89df-19f82b43f043-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.938157 4988 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b12d6f8-ea7a-4a60-b459-11563683791d-server-conf\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.938166 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.949315 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "2e680706-1677-4f92-9957-9dd477bbc7be" (UID: "2e680706-1677-4f92-9957-9dd477bbc7be"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.951981 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement2c17-account-delete-ps9t4"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.955965 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdef8d22-1ecf-4086-9506-16378fd96db2" (UID: "cdef8d22-1ecf-4086-9506-16378fd96db2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.962841 4988 scope.go:117] "RemoveContainer" containerID="5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749" Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.964809 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749\": container with ID starting with 5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749 not found: ID does not exist" containerID="5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.964847 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749"} err="failed to get container status \"5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749\": rpc error: code = NotFound desc = could not find container \"5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749\": container with ID starting with 5bea7b7366768b0b515133fa7efe2ab00fbed9b9f4a4edf2dbb5ed37e52cf749 not found: ID does not exist" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.964874 4988 scope.go:117] "RemoveContainer" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" Nov 23 07:10:40 crc kubenswrapper[4988]: E1123 07:10:40.965741 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2\": container with ID starting with a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2 not found: ID does not exist" containerID="a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.965786 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2"} err="failed to get container status \"a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2\": rpc error: code = NotFound desc = could not find container \"a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2\": container with ID starting with a8805adfc8d4526a5b1f4e92784ea036d98c088c034e09193059aae3a12c56f2 not found: ID does not exist" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.981510 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "cdef8d22-1ecf-4086-9506-16378fd96db2" (UID: "cdef8d22-1ecf-4086-9506-16378fd96db2"). 
InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.982187 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "2e680706-1677-4f92-9957-9dd477bbc7be" (UID: "2e680706-1677-4f92-9957-9dd477bbc7be"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.989752 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:10:40 crc kubenswrapper[4988]: I1123 07:10:40.994632 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.015632 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0b12d6f8-ea7a-4a60-b459-11563683791d" (UID: "0b12d6f8-ea7a-4a60-b459-11563683791d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.039454 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.039501 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.039518 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b12d6f8-ea7a-4a60-b459-11563683791d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.039626 4988 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e680706-1677-4f92-9957-9dd477bbc7be-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.039645 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdef8d22-1ecf-4086-9506-16378fd96db2-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.095915 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-h48h7"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.112557 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-h48h7"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.121342 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapicb35-account-delete-wsxgk"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.126499 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-cb35-account-create-bcnxl"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.130693 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapicb35-account-delete-wsxgk"] Nov 23 07:10:41 crc 
kubenswrapper[4988]: I1123 07:10:41.136484 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-cb35-account-create-bcnxl"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.272412 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.279409 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.654099 4988 generic.go:334] "Generic (PLEG): container finished" podID="89362c9c-bf2d-4e66-8ac3-7b288262b3d8" containerID="039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b" exitCode=0 Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.654257 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89362c9c-bf2d-4e66-8ac3-7b288262b3d8","Type":"ContainerDied","Data":"039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b"} Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.654531 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"89362c9c-bf2d-4e66-8ac3-7b288262b3d8","Type":"ContainerDied","Data":"ddc57bef8c9e208f81125bbac64a7b4441a6cc793a2bf1f5d1750400a17a4753"} Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.654550 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc57bef8c9e208f81125bbac64a7b4441a6cc793a2bf1f5d1750400a17a4753" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.664289 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zcfbn_cdef8d22-1ecf-4086-9506-16378fd96db2/ovn-controller/0.log" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.664434 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.665747 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zcfbn" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.674724 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zcfbn" event={"ID":"cdef8d22-1ecf-4086-9506-16378fd96db2","Type":"ContainerDied","Data":"3a483ab684bc2be58ce75d04f3eea6398f678bb332be2bc009640594e15a7d2b"} Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.674772 4988 scope.go:117] "RemoveContainer" containerID="02b318a143f93f5162fb1a66a0b628ccb66bbcc980bb1b1551d1e97b656766fe" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.675904 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.726266 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.732071 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.737080 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zcfbn"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.741674 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zcfbn"] Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.757072 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b7wq\" (UniqueName: \"kubernetes.io/projected/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-kube-api-access-9b7wq\") pod \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.757126 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-combined-ca-bundle\") pod \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.757156 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-config-data\") pod \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\" (UID: \"89362c9c-bf2d-4e66-8ac3-7b288262b3d8\") " Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.761252 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-kube-api-access-9b7wq" (OuterVolumeSpecName: "kube-api-access-9b7wq") pod "89362c9c-bf2d-4e66-8ac3-7b288262b3d8" (UID: "89362c9c-bf2d-4e66-8ac3-7b288262b3d8"). InnerVolumeSpecName "kube-api-access-9b7wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.792353 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-config-data" (OuterVolumeSpecName: "config-data") pod "89362c9c-bf2d-4e66-8ac3-7b288262b3d8" (UID: "89362c9c-bf2d-4e66-8ac3-7b288262b3d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.806251 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89362c9c-bf2d-4e66-8ac3-7b288262b3d8" (UID: "89362c9c-bf2d-4e66-8ac3-7b288262b3d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.858522 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b7wq\" (UniqueName: \"kubernetes.io/projected/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-kube-api-access-9b7wq\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.858550 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:41 crc kubenswrapper[4988]: I1123 07:10:41.858559 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89362c9c-bf2d-4e66-8ac3-7b288262b3d8-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.506597 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d" path="/var/lib/kubelet/pods/09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.507289 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" path="/var/lib/kubelet/pods/0b12d6f8-ea7a-4a60-b459-11563683791d/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.507950 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" path="/var/lib/kubelet/pods/2e680706-1677-4f92-9957-9dd477bbc7be/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.508992 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52da7e45-da4b-4b22-b4d9-de675091c282" path="/var/lib/kubelet/pods/52da7e45-da4b-4b22-b4d9-de675091c282/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.509507 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7" path="/var/lib/kubelet/pods/5e18c0cd-2b02-48f7-a9a9-8e7bacc665e7/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.510509 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ffda4e1-77d8-4d20-a473-ddb6030a3c40" path="/var/lib/kubelet/pods/5ffda4e1-77d8-4d20-a473-ddb6030a3c40/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.511597 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610d9cd6-32c2-4a24-a462-df3c8da3f90f" path="/var/lib/kubelet/pods/610d9cd6-32c2-4a24-a462-df3c8da3f90f/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.512421 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692be1c8-4d8f-4676-89df-19f82b43f043" path="/var/lib/kubelet/pods/692be1c8-4d8f-4676-89df-19f82b43f043/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.513840 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bde2362-ff90-47d5-845c-8dfcfe826a61" path="/var/lib/kubelet/pods/7bde2362-ff90-47d5-845c-8dfcfe826a61/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.515057 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ec8075-1751-43c6-877c-45747d783f30" path="/var/lib/kubelet/pods/97ec8075-1751-43c6-877c-45747d783f30/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.515824 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be021496-c112-4578-bfe4-8639fa51480a" 
path="/var/lib/kubelet/pods/be021496-c112-4578-bfe4-8639fa51480a/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.516498 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c92c0a02-48da-476e-869f-db5f076076d5" path="/var/lib/kubelet/pods/c92c0a02-48da-476e-869f-db5f076076d5/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.517656 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" path="/var/lib/kubelet/pods/cdef8d22-1ecf-4086-9506-16378fd96db2/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.518276 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d63ecb83-f85e-48fd-b8ab-0f7720422936" path="/var/lib/kubelet/pods/d63ecb83-f85e-48fd-b8ab-0f7720422936/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.519519 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e952929a-89c8-4084-836f-854260a97b3e" path="/var/lib/kubelet/pods/e952929a-89c8-4084-836f-854260a97b3e/volumes" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.635888 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.106:11211: i/o timeout" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.674897 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.701473 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:10:42 crc kubenswrapper[4988]: I1123 07:10:42.708214 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:10:43 crc kubenswrapper[4988]: E1123 07:10:43.783148 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:43 crc kubenswrapper[4988]: E1123 07:10:43.783557 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:43 crc kubenswrapper[4988]: E1123 07:10:43.783823 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:43 crc kubenswrapper[4988]: E1123 07:10:43.783860 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: 
container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server" Nov 23 07:10:43 crc kubenswrapper[4988]: E1123 07:10:43.784656 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:43 crc kubenswrapper[4988]: E1123 07:10:43.787011 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:43 crc kubenswrapper[4988]: E1123 07:10:43.788857 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:43 crc kubenswrapper[4988]: E1123 07:10:43.788892 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd" Nov 23 07:10:44 crc kubenswrapper[4988]: I1123 07:10:44.509509 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89362c9c-bf2d-4e66-8ac3-7b288262b3d8" path="/var/lib/kubelet/pods/89362c9c-bf2d-4e66-8ac3-7b288262b3d8/volumes" Nov 23 07:10:47 crc kubenswrapper[4988]: I1123 07:10:47.993645 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-68dbd6466f-n6f5g" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.161:9696/\": dial tcp 10.217.0.161:9696: connect: connection refused" Nov 23 07:10:48 crc kubenswrapper[4988]: E1123 07:10:48.782971 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:48 crc kubenswrapper[4988]: E1123 07:10:48.783602 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:48 crc kubenswrapper[4988]: E1123 07:10:48.784122 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:48 crc kubenswrapper[4988]: E1123 07:10:48.784161 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server" Nov 23 07:10:48 crc kubenswrapper[4988]: E1123 07:10:48.785473 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:48 crc kubenswrapper[4988]: E1123 07:10:48.787668 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:48 crc kubenswrapper[4988]: E1123 07:10:48.789174 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:48 crc kubenswrapper[4988]: E1123 07:10:48.789238 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.752292 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f4qnt"] Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753288 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be021496-c112-4578-bfe4-8639fa51480a" containerName="mysql-bootstrap" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753321 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="be021496-c112-4578-bfe4-8639fa51480a" containerName="mysql-bootstrap" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753348 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerName="placement-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753364 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerName="placement-log" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753382 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-metadata" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753400 4988 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-metadata" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753424 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692be1c8-4d8f-4676-89df-19f82b43f043" containerName="rabbitmq" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753437 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="692be1c8-4d8f-4676-89df-19f82b43f043" containerName="rabbitmq" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753458 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece4b7bd-2b01-4ad3-8782-a4d7341f0b60" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753472 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece4b7bd-2b01-4ad3-8782-a4d7341f0b60" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753506 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bde2362-ff90-47d5-845c-8dfcfe826a61" containerName="keystone-api" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753520 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bde2362-ff90-47d5-845c-8dfcfe826a61" containerName="keystone-api" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753546 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerName="barbican-keystone-listener" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753564 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerName="barbican-keystone-listener" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753582 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753600 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753629 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="ovn-northd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753642 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="ovn-northd" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753657 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerName="rabbitmq" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753670 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerName="rabbitmq" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753683 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610d9cd6-32c2-4a24-a462-df3c8da3f90f" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753696 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="610d9cd6-32c2-4a24-a462-df3c8da3f90f" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753722 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" containerName="nova-cell0-conductor-conductor" Nov 23 07:10:50 
crc kubenswrapper[4988]: I1123 07:10:50.753736 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" containerName="nova-cell0-conductor-conductor" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753762 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerName="setup-container" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753776 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerName="setup-container" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753803 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f2198a-7d70-4447-b8c2-62ac40b5c167" containerName="dnsmasq-dns" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753816 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f2198a-7d70-4447-b8c2-62ac40b5c167" containerName="dnsmasq-dns" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753843 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c548ca-78f0-4e91-8a5d-dce756b0421e" containerName="mysql-bootstrap" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753856 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c548ca-78f0-4e91-8a5d-dce756b0421e" containerName="mysql-bootstrap" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753874 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753888 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753905 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="ceilometer-central-agent" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753918 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="ceilometer-central-agent" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.753943 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="sg-core" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.753985 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="sg-core" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754004 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" containerName="kube-state-metrics" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754018 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" containerName="kube-state-metrics" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754043 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d09b31-ee49-498b-bbaf-368e53723f62" containerName="nova-cell1-conductor-conductor" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754056 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d09b31-ee49-498b-bbaf-368e53723f62" containerName="nova-cell1-conductor-conductor" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754076 4988 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="ceilometer-notification-agent" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754090 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="ceilometer-notification-agent" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754110 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-api" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754123 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-api" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754141 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754154 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-log" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754174 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerName="ovsdbserver-nb" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754187 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerName="ovsdbserver-nb" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754237 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be021496-c112-4578-bfe4-8639fa51480a" containerName="galera" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754249 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="be021496-c112-4578-bfe4-8639fa51480a" containerName="galera" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754281 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-server" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754299 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-server" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754330 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerName="placement-api" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754343 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerName="placement-api" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754358 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d7a2fe-56c9-4c21-98b4-88c12252bbe7" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754371 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d7a2fe-56c9-4c21-98b4-88c12252bbe7" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754385 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" containerName="memcached" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754398 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" containerName="memcached" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754415 4988 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754428 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754449 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754462 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754482 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerName="glance-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754495 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerName="glance-log" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754512 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f2198a-7d70-4447-b8c2-62ac40b5c167" containerName="init" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754525 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f2198a-7d70-4447-b8c2-62ac40b5c167" containerName="init" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754546 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b82b9a4-6707-446c-abea-2d4a560a43d7" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754560 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b82b9a4-6707-446c-abea-2d4a560a43d7" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754576 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerName="ovsdbserver-sb" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754592 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerName="ovsdbserver-sb" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754616 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4be2080-1204-4f6e-ac00-bba757695872" containerName="cinder-api-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754634 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4be2080-1204-4f6e-ac00-bba757695872" containerName="cinder-api-log" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754662 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerName="barbican-keystone-listener-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754680 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerName="barbican-keystone-listener-log" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754706 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754723 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d" containerName="mariadb-account-delete" Nov 23 07:10:50 crc 
kubenswrapper[4988]: E1123 07:10:50.754752 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afd4c0a-59f2-4313-a198-4e0e8255f163" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754770 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afd4c0a-59f2-4313-a198-4e0e8255f163" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754802 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f3e05f-502b-4e0a-b5a2-4ab8ec42c410" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754815 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f3e05f-502b-4e0a-b5a2-4ab8ec42c410" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754842 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" containerName="barbican-worker-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754855 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" containerName="barbican-worker-log" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754877 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerName="glance-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754891 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerName="glance-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754915 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" containerName="barbican-worker" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754928 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" containerName="barbican-worker" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754947 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="proxy-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754960 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="proxy-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.754979 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.754992 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755013 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52da7e45-da4b-4b22-b4d9-de675091c282" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755026 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="52da7e45-da4b-4b22-b4d9-de675091c282" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755048 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755061 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-log" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755077 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755090 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755117 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89362c9c-bf2d-4e66-8ac3-7b288262b3d8" containerName="nova-scheduler-scheduler" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755129 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="89362c9c-bf2d-4e66-8ac3-7b288262b3d8" containerName="nova-scheduler-scheduler" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755150 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755164 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755223 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4be2080-1204-4f6e-ac00-bba757695872" containerName="cinder-api" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755237 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4be2080-1204-4f6e-ac00-bba757695872" containerName="cinder-api" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755263 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c548ca-78f0-4e91-8a5d-dce756b0421e" containerName="galera" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755280 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c548ca-78f0-4e91-8a5d-dce756b0421e" containerName="galera" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755310 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755324 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api-log" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755343 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerName="glance-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755356 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerName="glance-log" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755388 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692be1c8-4d8f-4676-89df-19f82b43f043" containerName="setup-container" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755402 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="692be1c8-4d8f-4676-89df-19f82b43f043" containerName="setup-container" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755427 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerName="cinder-scheduler" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755439 4988 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerName="cinder-scheduler" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755454 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerName="probe" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755466 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerName="probe" Nov 23 07:10:50 crc kubenswrapper[4988]: E1123 07:10:50.755483 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerName="glance-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755495 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerName="glance-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755829 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerName="ovsdbserver-nb" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755860 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-metadata" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755886 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="ceilometer-notification-agent" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755911 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="sg-core" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755932 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d09b31-ee49-498b-bbaf-368e53723f62" containerName="nova-cell1-conductor-conductor" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755949 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="610d9cd6-32c2-4a24-a462-df3c8da3f90f" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755973 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerName="barbican-keystone-listener-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.755997 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="ceilometer-central-agent" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756018 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756032 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c4a2cd-004d-42ad-bfee-3ec44daff1f1" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756057 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bde2362-ff90-47d5-845c-8dfcfe826a61" containerName="keystone-api" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756073 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f3e05f-502b-4e0a-b5a2-4ab8ec42c410" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756085 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" 
containerName="barbican-worker" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756108 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d7a2fe-56c9-4c21-98b4-88c12252bbe7" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756123 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerName="ovsdbserver-sb" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756139 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ff5bcf-f9f9-4a1d-93d5-39b0ffe73a72" containerName="nova-cell0-conductor-conductor" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756162 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb35e7c-c792-48c9-8f52-ac3e9cc283f6" containerName="memcached" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756183 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="89362c9c-bf2d-4e66-8ac3-7b288262b3d8" containerName="nova-scheduler-scheduler" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756230 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afd4c0a-59f2-4313-a198-4e0e8255f163" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756252 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b82b9a4-6707-446c-abea-2d4a560a43d7" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756275 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="692be1c8-4d8f-4676-89df-19f82b43f043" containerName="rabbitmq" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756309 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="832df8ad-6b73-46a8-979f-ec3887c49e83" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756332 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerName="glance-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756356 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6095bfb4-4519-4c6a-9ded-5f8a0db254c1" containerName="proxy-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756375 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerName="cinder-scheduler" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756390 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="52da7e45-da4b-4b22-b4d9-de675091c282" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756412 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4be2080-1204-4f6e-ac00-bba757695872" containerName="cinder-api-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756430 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="351d084c-73d8-4965-97c8-407826793cd6" containerName="proxy-server" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756449 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f5e1e6-0051-487f-b9ca-76003e7deed1" containerName="probe" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756481 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api" Nov 23 07:10:50 crc kubenswrapper[4988]: 
I1123 07:10:50.756502 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c548ca-78f0-4e91-8a5d-dce756b0421e" containerName="galera" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756522 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f2198a-7d70-4447-b8c2-62ac40b5c167" containerName="dnsmasq-dns" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756551 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbae5c0b-cb91-459a-acb7-e494aedd6d99" containerName="barbican-worker-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756579 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f4abc1-2f6d-4f41-9d49-bd5d4fe5246d" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756605 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-api" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756620 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerName="glance-httpd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756634 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerName="placement-api" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756657 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="720d09f3-1104-47a0-93e9-ffb48cf1ae69" containerName="glance-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756677 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece4b7bd-2b01-4ad3-8782-a4d7341f0b60" containerName="mariadb-account-delete" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756694 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756707 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4be2080-1204-4f6e-ac00-bba757695872" containerName="cinder-api" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756722 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b12d6f8-ea7a-4a60-b459-11563683791d" containerName="rabbitmq" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756739 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd525929-59bb-4b7f-b3a4-12e2e4a03cd4" containerName="glance-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756758 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="be021496-c112-4578-bfe4-8639fa51480a" containerName="galera" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756775 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="612dbf27-0967-4833-a62f-c86a008fe257" containerName="nova-api-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756800 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdef8d22-1ecf-4086-9506-16378fd96db2" containerName="ovn-controller" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756819 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afe27be-257c-4ea4-84c1-e41a289ad06a" containerName="barbican-keystone-listener" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756837 4988 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c11163ee-e1e7-47a7-a454-610a8b27542f" containerName="openstack-network-exporter" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756854 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e680706-1677-4f92-9957-9dd477bbc7be" containerName="ovn-northd" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756882 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="61784d29-67cb-4150-923e-0e819bdde923" containerName="barbican-api-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756907 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a28a2bd-cf03-47d7-b142-63b066fdeb42" containerName="nova-metadata-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756932 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c60d96-5836-4df5-8fa0-8e7ce2b6d1e3" containerName="kube-state-metrics" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.756959 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6f8f28-b3df-4d34-a898-74f4dc12f201" containerName="placement-log" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.759346 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.763741 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4qnt"] Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.923154 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-utilities\") pod \"certified-operators-f4qnt\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.923215 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64mqr\" (UniqueName: \"kubernetes.io/projected/2443148e-77cc-47d4-bfc2-60e388d69115-kube-api-access-64mqr\") pod \"certified-operators-f4qnt\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:50 crc kubenswrapper[4988]: I1123 07:10:50.923246 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-catalog-content\") pod \"certified-operators-f4qnt\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.024129 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-utilities\") pod \"certified-operators-f4qnt\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.024182 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64mqr\" (UniqueName: \"kubernetes.io/projected/2443148e-77cc-47d4-bfc2-60e388d69115-kube-api-access-64mqr\") pod \"certified-operators-f4qnt\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:51 crc kubenswrapper[4988]: 
I1123 07:10:51.024224 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-catalog-content\") pod \"certified-operators-f4qnt\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.024749 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-utilities\") pod \"certified-operators-f4qnt\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.024796 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-catalog-content\") pod \"certified-operators-f4qnt\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.045516 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64mqr\" (UniqueName: \"kubernetes.io/projected/2443148e-77cc-47d4-bfc2-60e388d69115-kube-api-access-64mqr\") pod \"certified-operators-f4qnt\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.091528 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.562407 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4qnt"] Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.672272 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.672343 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.749310 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.785757 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4qnt" event={"ID":"2443148e-77cc-47d4-bfc2-60e388d69115","Type":"ContainerStarted","Data":"2e965134558721f73b920515e899e6af19c84218885de4317d260c56dce5dd85"} Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.787476 4988 generic.go:334] "Generic (PLEG): container finished" podID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerID="9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6" exitCode=0 Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.787506 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68dbd6466f-n6f5g" event={"ID":"873f95e0-7013-479e-b8b1-d3cf948d24fe","Type":"ContainerDied","Data":"9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6"} Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.787521 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68dbd6466f-n6f5g" event={"ID":"873f95e0-7013-479e-b8b1-d3cf948d24fe","Type":"ContainerDied","Data":"8e9bd4fcecf6c442f4380daaffd5536973cf90cef33a2a9d14139a5be44e65c9"} Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.787535 4988 scope.go:117] "RemoveContainer" containerID="79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.787643 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68dbd6466f-n6f5g" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.812764 4988 scope.go:117] "RemoveContainer" containerID="9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.834876 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-internal-tls-certs\") pod \"873f95e0-7013-479e-b8b1-d3cf948d24fe\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.835001 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-combined-ca-bundle\") pod \"873f95e0-7013-479e-b8b1-d3cf948d24fe\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.835177 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-httpd-config\") pod \"873f95e0-7013-479e-b8b1-d3cf948d24fe\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.835231 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-config\") pod \"873f95e0-7013-479e-b8b1-d3cf948d24fe\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.835262 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-ovndb-tls-certs\") pod \"873f95e0-7013-479e-b8b1-d3cf948d24fe\" (UID: 
\"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.835313 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwpkt\" (UniqueName: \"kubernetes.io/projected/873f95e0-7013-479e-b8b1-d3cf948d24fe-kube-api-access-dwpkt\") pod \"873f95e0-7013-479e-b8b1-d3cf948d24fe\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.835350 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-public-tls-certs\") pod \"873f95e0-7013-479e-b8b1-d3cf948d24fe\" (UID: \"873f95e0-7013-479e-b8b1-d3cf948d24fe\") " Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.842453 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873f95e0-7013-479e-b8b1-d3cf948d24fe-kube-api-access-dwpkt" (OuterVolumeSpecName: "kube-api-access-dwpkt") pod "873f95e0-7013-479e-b8b1-d3cf948d24fe" (UID: "873f95e0-7013-479e-b8b1-d3cf948d24fe"). InnerVolumeSpecName "kube-api-access-dwpkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.842664 4988 scope.go:117] "RemoveContainer" containerID="79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.842856 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "873f95e0-7013-479e-b8b1-d3cf948d24fe" (UID: "873f95e0-7013-479e-b8b1-d3cf948d24fe"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:51 crc kubenswrapper[4988]: E1123 07:10:51.843188 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a\": container with ID starting with 79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a not found: ID does not exist" containerID="79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.843242 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a"} err="failed to get container status \"79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a\": rpc error: code = NotFound desc = could not find container \"79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a\": container with ID starting with 79cc3b63eb4954fdd3164dbd8f67e619ae0e99a486ebc711e89076c72534f27a not found: ID does not exist" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.843267 4988 scope.go:117] "RemoveContainer" containerID="9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6" Nov 23 07:10:51 crc kubenswrapper[4988]: E1123 07:10:51.843561 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6\": container with ID starting with 9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6 not found: ID does not exist" containerID="9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.843589 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6"} err="failed to get container status \"9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6\": rpc error: code = NotFound desc = could not find container \"9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6\": container with ID starting with 9caa054c1ff9da712f2d2241c2cd5015b876811d330da0965259ea926d7bafc6 not found: ID does not exist" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.871400 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-config" (OuterVolumeSpecName: "config") pod "873f95e0-7013-479e-b8b1-d3cf948d24fe" (UID: "873f95e0-7013-479e-b8b1-d3cf948d24fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.876899 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "873f95e0-7013-479e-b8b1-d3cf948d24fe" (UID: "873f95e0-7013-479e-b8b1-d3cf948d24fe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.880041 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "873f95e0-7013-479e-b8b1-d3cf948d24fe" (UID: "873f95e0-7013-479e-b8b1-d3cf948d24fe"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.888843 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "873f95e0-7013-479e-b8b1-d3cf948d24fe" (UID: "873f95e0-7013-479e-b8b1-d3cf948d24fe"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.901490 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "873f95e0-7013-479e-b8b1-d3cf948d24fe" (UID: "873f95e0-7013-479e-b8b1-d3cf948d24fe"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.936747 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.936789 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.936798 4988 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.936813 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwpkt\" (UniqueName: \"kubernetes.io/projected/873f95e0-7013-479e-b8b1-d3cf948d24fe-kube-api-access-dwpkt\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.936822 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.936830 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:51 crc kubenswrapper[4988]: I1123 07:10:51.936839 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/873f95e0-7013-479e-b8b1-d3cf948d24fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.132473 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68dbd6466f-n6f5g"] Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.146425 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-68dbd6466f-n6f5g"] Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.362855 4988 scope.go:117] "RemoveContainer" containerID="6cab56cdeb602d70ff9e9c195a76c5f7dd21de9bcc59905fa9189998e4fff53d" Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.397931 4988 scope.go:117] "RemoveContainer" containerID="68466f03cc012e80f4cfdb29fa67746fe3b1696571d0bc999ac0bb9f1d9506e5" Nov 23 07:10:52 crc 
Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.424308 4988 scope.go:117] "RemoveContainer" containerID="26570438aa5de5396fe7cefb58e572a2b24bad93573fe8afe37c0a4d296c6949"
Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.448107 4988 scope.go:117] "RemoveContainer" containerID="2449f703f7311cf646d0edc484376bd64bc707c2506b087647d3f70f964b9a7b"
Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.484675 4988 scope.go:117] "RemoveContainer" containerID="96a7d026dbf741ef763cbef53b41d738422ff89aae829eedde4d6c6dc76818d6"
Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.504434 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" path="/var/lib/kubelet/pods/873f95e0-7013-479e-b8b1-d3cf948d24fe/volumes"
Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.506519 4988 scope.go:117] "RemoveContainer" containerID="b7f2b31e3788431c900fb58a4fcba23d530014c5b433252244401bbe068b58eb"
Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.533661 4988 scope.go:117] "RemoveContainer" containerID="ff17dfc095111f52510f65b355dd4871947190d74f7a84b768c2f07965a73d84"
Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.801011 4988 generic.go:334] "Generic (PLEG): container finished" podID="2443148e-77cc-47d4-bfc2-60e388d69115" containerID="e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213" exitCode=0
Nov 23 07:10:52 crc kubenswrapper[4988]: I1123 07:10:52.801126 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4qnt" event={"ID":"2443148e-77cc-47d4-bfc2-60e388d69115","Type":"ContainerDied","Data":"e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213"}
Nov 23 07:10:53 crc kubenswrapper[4988]: E1123 07:10:53.782895 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 23 07:10:53 crc kubenswrapper[4988]: E1123 07:10:53.783841 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 23 07:10:53 crc kubenswrapper[4988]: E1123 07:10:53.784321 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Nov 23 07:10:53 crc kubenswrapper[4988]: E1123 07:10:53.784424 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server"
Nov 23 07:10:53 crc kubenswrapper[4988]: E1123 07:10:53.786426 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 23 07:10:53 crc kubenswrapper[4988]: E1123 07:10:53.790299 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 23 07:10:53 crc kubenswrapper[4988]: E1123 07:10:53.793708 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Nov 23 07:10:53 crc kubenswrapper[4988]: E1123 07:10:53.793771 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd"
Nov 23 07:10:54 crc kubenswrapper[4988]: I1123 07:10:54.842550 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4qnt" event={"ID":"2443148e-77cc-47d4-bfc2-60e388d69115","Type":"ContainerStarted","Data":"160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0"}
Nov 23 07:10:55 crc kubenswrapper[4988]: I1123 07:10:55.861523 4988 generic.go:334] "Generic (PLEG): container finished" podID="2443148e-77cc-47d4-bfc2-60e388d69115" containerID="160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0" exitCode=0
Nov 23 07:10:55 crc kubenswrapper[4988]: I1123 07:10:55.862093 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4qnt" event={"ID":"2443148e-77cc-47d4-bfc2-60e388d69115","Type":"ContainerDied","Data":"160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0"}
Nov 23 07:10:56 crc kubenswrapper[4988]: I1123 07:10:56.873184 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4qnt" event={"ID":"2443148e-77cc-47d4-bfc2-60e388d69115","Type":"ContainerStarted","Data":"0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7"}
Nov 23 07:10:56 crc kubenswrapper[4988]: I1123 07:10:56.902246 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f4qnt" podStartSLOduration=3.428378307 podStartE2EDuration="6.902222911s" podCreationTimestamp="2025-11-23 07:10:50 +0000 UTC" firstStartedPulling="2025-11-23 07:10:52.802958792 +0000 UTC m=+1505.111471555" lastFinishedPulling="2025-11-23 07:10:56.276803356 +0000 UTC m=+1508.585316159" observedRunningTime="2025-11-23 07:10:56.900844377 +0000 UTC m=+1509.209357160" watchObservedRunningTime="2025-11-23 07:10:56.902222911 +0000 UTC m=+1509.210735714"
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:58 crc kubenswrapper[4988]: E1123 07:10:58.783323 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:58 crc kubenswrapper[4988]: E1123 07:10:58.783819 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:10:58 crc kubenswrapper[4988]: E1123 07:10:58.783860 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server" Nov 23 07:10:58 crc kubenswrapper[4988]: E1123 07:10:58.784052 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:58 crc kubenswrapper[4988]: E1123 07:10:58.785683 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:58 crc kubenswrapper[4988]: E1123 07:10:58.788632 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:10:58 crc kubenswrapper[4988]: E1123 07:10:58.788682 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7xsjx" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd" Nov 23 07:11:01 crc kubenswrapper[4988]: I1123 07:11:01.091727 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:11:01 crc kubenswrapper[4988]: I1123 07:11:01.092106 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:11:01 crc kubenswrapper[4988]: I1123 07:11:01.170357 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.023092 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.099762 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4qnt"] Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.513942 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7xsjx_618fb238-2a5a-4265-9545-9ccbf016f855/ovs-vswitchd/0.log" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.515163 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7xsjx" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.605273 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/618fb238-2a5a-4265-9545-9ccbf016f855-scripts\") pod \"618fb238-2a5a-4265-9545-9ccbf016f855\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.605331 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-etc-ovs\") pod \"618fb238-2a5a-4265-9545-9ccbf016f855\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.605405 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-run\") pod \"618fb238-2a5a-4265-9545-9ccbf016f855\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.605470 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfnqj\" (UniqueName: \"kubernetes.io/projected/618fb238-2a5a-4265-9545-9ccbf016f855-kube-api-access-gfnqj\") pod \"618fb238-2a5a-4265-9545-9ccbf016f855\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.605517 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "618fb238-2a5a-4265-9545-9ccbf016f855" (UID: "618fb238-2a5a-4265-9545-9ccbf016f855"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.605607 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-log\") pod \"618fb238-2a5a-4265-9545-9ccbf016f855\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.605663 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-lib\") pod \"618fb238-2a5a-4265-9545-9ccbf016f855\" (UID: \"618fb238-2a5a-4265-9545-9ccbf016f855\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.606115 4988 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-etc-ovs\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.605585 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-run" (OuterVolumeSpecName: "var-run") pod "618fb238-2a5a-4265-9545-9ccbf016f855" (UID: "618fb238-2a5a-4265-9545-9ccbf016f855"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.606671 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/618fb238-2a5a-4265-9545-9ccbf016f855-scripts" (OuterVolumeSpecName: "scripts") pod "618fb238-2a5a-4265-9545-9ccbf016f855" (UID: "618fb238-2a5a-4265-9545-9ccbf016f855"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.606699 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-log" (OuterVolumeSpecName: "var-log") pod "618fb238-2a5a-4265-9545-9ccbf016f855" (UID: "618fb238-2a5a-4265-9545-9ccbf016f855"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.606718 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-lib" (OuterVolumeSpecName: "var-lib") pod "618fb238-2a5a-4265-9545-9ccbf016f855" (UID: "618fb238-2a5a-4265-9545-9ccbf016f855"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.617458 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/618fb238-2a5a-4265-9545-9ccbf016f855-kube-api-access-gfnqj" (OuterVolumeSpecName: "kube-api-access-gfnqj") pod "618fb238-2a5a-4265-9545-9ccbf016f855" (UID: "618fb238-2a5a-4265-9545-9ccbf016f855"). InnerVolumeSpecName "kube-api-access-gfnqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.694330 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.707605 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfnqj\" (UniqueName: \"kubernetes.io/projected/618fb238-2a5a-4265-9545-9ccbf016f855-kube-api-access-gfnqj\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.707645 4988 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-log\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.707662 4988 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-lib\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.707675 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/618fb238-2a5a-4265-9545-9ccbf016f855-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.707692 4988 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/618fb238-2a5a-4265-9545-9ccbf016f855-var-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.808681 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfbtn\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-kube-api-access-vfbtn\") pod \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.809169 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") pod \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.809446 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.809589 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-lock\") pod \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.809707 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-cache\") pod \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\" (UID: \"fa95668c-09b0-4440-ab49-f1a1b29ebf64\") " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.809931 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-lock" (OuterVolumeSpecName: "lock") pod "fa95668c-09b0-4440-ab49-f1a1b29ebf64" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.810201 4988 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-lock\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.810501 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-cache" (OuterVolumeSpecName: "cache") pod "fa95668c-09b0-4440-ab49-f1a1b29ebf64" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.812308 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "fa95668c-09b0-4440-ab49-f1a1b29ebf64" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.812379 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-kube-api-access-vfbtn" (OuterVolumeSpecName: "kube-api-access-vfbtn") pod "fa95668c-09b0-4440-ab49-f1a1b29ebf64" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64"). InnerVolumeSpecName "kube-api-access-vfbtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.812990 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fa95668c-09b0-4440-ab49-f1a1b29ebf64" (UID: "fa95668c-09b0-4440-ab49-f1a1b29ebf64"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.911698 4988 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.911763 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.911777 4988 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fa95668c-09b0-4440-ab49-f1a1b29ebf64-cache\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.911790 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfbtn\" (UniqueName: \"kubernetes.io/projected/fa95668c-09b0-4440-ab49-f1a1b29ebf64-kube-api-access-vfbtn\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.935770 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.942014 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7xsjx_618fb238-2a5a-4265-9545-9ccbf016f855/ovs-vswitchd/0.log" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.943612 4988 generic.go:334] "Generic (PLEG): container finished" podID="618fb238-2a5a-4265-9545-9ccbf016f855" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" exitCode=137 Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.943686 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xsjx" event={"ID":"618fb238-2a5a-4265-9545-9ccbf016f855","Type":"ContainerDied","Data":"3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7"} Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.943717 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xsjx" event={"ID":"618fb238-2a5a-4265-9545-9ccbf016f855","Type":"ContainerDied","Data":"6a44b8c53ec4907db963b6534027cc4bc118b9c92141b2bda62677827436fea8"} Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.943738 4988 scope.go:117] "RemoveContainer" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.943739 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7xsjx" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.955678 4988 generic.go:334] "Generic (PLEG): container finished" podID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerID="f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4" exitCode=137 Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.955780 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.955807 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4"} Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.955882 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fa95668c-09b0-4440-ab49-f1a1b29ebf64","Type":"ContainerDied","Data":"1225812bb325d956d4d318e786e4a0703285c095ba6c8fa7b9eb58b17fc9a7ee"} Nov 23 07:11:02 crc kubenswrapper[4988]: I1123 07:11:02.995371 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-7xsjx"] Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.001448 4988 scope.go:117] "RemoveContainer" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.005963 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-7xsjx"] Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.016993 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.026343 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.034426 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.037864 4988 scope.go:117] "RemoveContainer" containerID="23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.074828 4988 scope.go:117] "RemoveContainer" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.075481 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7\": container with ID starting with 3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7 not found: ID does not exist" containerID="3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.075537 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7"} err="failed to get container status \"3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7\": rpc error: code = NotFound desc = could not find container \"3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7\": container with ID starting with 3ee44d77862fea4003979e433fad0ba6f375b0bfdc4a78abe92f7b739912f4e7 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.075576 4988 scope.go:117] "RemoveContainer" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.076152 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47\": 
container with ID starting with 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 not found: ID does not exist" containerID="9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.076614 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47"} err="failed to get container status \"9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47\": rpc error: code = NotFound desc = could not find container \"9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47\": container with ID starting with 9a33b42d251e87b4a340dc711d4ac3889a06f1b942764de6dd8c36fba5ec3e47 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.076755 4988 scope.go:117] "RemoveContainer" containerID="23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.077765 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6\": container with ID starting with 23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6 not found: ID does not exist" containerID="23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.077812 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6"} err="failed to get container status \"23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6\": rpc error: code = NotFound desc = could not find container \"23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6\": container with ID starting with 23db271bad4c92088f61e34e6a36c5f1e9d112106e80342f29f2dab227690ac6 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.077839 4988 scope.go:117] "RemoveContainer" containerID="f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.106590 4988 scope.go:117] "RemoveContainer" containerID="e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.130292 4988 scope.go:117] "RemoveContainer" containerID="cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.161262 4988 scope.go:117] "RemoveContainer" containerID="a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.185302 4988 scope.go:117] "RemoveContainer" containerID="6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.212410 4988 scope.go:117] "RemoveContainer" containerID="ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.241949 4988 scope.go:117] "RemoveContainer" containerID="44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.270825 4988 scope.go:117] "RemoveContainer" containerID="7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.298040 4988 scope.go:117] 
"RemoveContainer" containerID="46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.334480 4988 scope.go:117] "RemoveContainer" containerID="b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.362140 4988 scope.go:117] "RemoveContainer" containerID="19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.386446 4988 scope.go:117] "RemoveContainer" containerID="00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.402478 4988 scope.go:117] "RemoveContainer" containerID="bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.428293 4988 scope.go:117] "RemoveContainer" containerID="5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.470242 4988 scope.go:117] "RemoveContainer" containerID="658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.496364 4988 scope.go:117] "RemoveContainer" containerID="f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.496962 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4\": container with ID starting with f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4 not found: ID does not exist" containerID="f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.497025 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4"} err="failed to get container status \"f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4\": rpc error: code = NotFound desc = could not find container \"f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4\": container with ID starting with f678e9aafde431e7eba61431ecf752755fad6af9c3006f7444b1986098df1bc4 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.497066 4988 scope.go:117] "RemoveContainer" containerID="e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.498656 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa\": container with ID starting with e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa not found: ID does not exist" containerID="e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.498704 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa"} err="failed to get container status \"e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa\": rpc error: code = NotFound desc = could not find container \"e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa\": container with ID starting with 
e2a289e05893b410eb61b2cd4fbf9b501779fd579c314de0cc6c92f9a6f2baaa not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.498767 4988 scope.go:117] "RemoveContainer" containerID="cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.499325 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e\": container with ID starting with cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e not found: ID does not exist" containerID="cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.499349 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e"} err="failed to get container status \"cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e\": rpc error: code = NotFound desc = could not find container \"cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e\": container with ID starting with cfb76b71dc99cf67d08bedaadab5d89ea51d39a6414ff5c3af1bb7d5be5dfe0e not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.499364 4988 scope.go:117] "RemoveContainer" containerID="a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.499662 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764\": container with ID starting with a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764 not found: ID does not exist" containerID="a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.499686 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764"} err="failed to get container status \"a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764\": rpc error: code = NotFound desc = could not find container \"a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764\": container with ID starting with a956a7ccad714b8945ffdebf9ff6640cac7fb0d5515fdf1f7242012ace904764 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.499701 4988 scope.go:117] "RemoveContainer" containerID="6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.499976 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474\": container with ID starting with 6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474 not found: ID does not exist" containerID="6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500014 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474"} err="failed to get container status \"6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474\": rpc 
error: code = NotFound desc = could not find container \"6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474\": container with ID starting with 6251e3ca8cbd2958eb671483ce1c1568efe5a256700cb6f55bea1c44976f0474 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500029 4988 scope.go:117] "RemoveContainer" containerID="ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.500263 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c\": container with ID starting with ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c not found: ID does not exist" containerID="ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500283 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c"} err="failed to get container status \"ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c\": rpc error: code = NotFound desc = could not find container \"ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c\": container with ID starting with ccc53697bab09da3cf51ccfd80d1bb243a587cb66b83272dc4a880f9aee8076c not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500296 4988 scope.go:117] "RemoveContainer" containerID="44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.500473 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd\": container with ID starting with 44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd not found: ID does not exist" containerID="44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500488 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd"} err="failed to get container status \"44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd\": rpc error: code = NotFound desc = could not find container \"44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd\": container with ID starting with 44d7140869016a5d8dc7cc4c6e4ec37f7db6c8d2e144a66ad303a906667f6fbd not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500516 4988 scope.go:117] "RemoveContainer" containerID="7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.500680 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b\": container with ID starting with 7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b not found: ID does not exist" containerID="7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500697 4988 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b"} err="failed to get container status \"7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b\": rpc error: code = NotFound desc = could not find container \"7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b\": container with ID starting with 7a38cec98ea32ba686285010990e5b063b86e2030e9faef33903359b0655200b not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500709 4988 scope.go:117] "RemoveContainer" containerID="46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.500872 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc\": container with ID starting with 46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc not found: ID does not exist" containerID="46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500886 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc"} err="failed to get container status \"46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc\": rpc error: code = NotFound desc = could not find container \"46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc\": container with ID starting with 46bf3f849e20d56e3e6b467a80d94a24e6c5cbfc45a1b0c351d706cfeb1e7ebc not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.500897 4988 scope.go:117] "RemoveContainer" containerID="b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.501040 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de\": container with ID starting with b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de not found: ID does not exist" containerID="b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.501053 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de"} err="failed to get container status \"b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de\": rpc error: code = NotFound desc = could not find container \"b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de\": container with ID starting with b0f746264e58ae8d671fe65a1dfc54765d6b636bae1b02c74d7533f60e5062de not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.501080 4988 scope.go:117] "RemoveContainer" containerID="19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.501229 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732\": container with ID starting with 19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732 not found: ID does not exist" 
containerID="19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.501246 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732"} err="failed to get container status \"19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732\": rpc error: code = NotFound desc = could not find container \"19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732\": container with ID starting with 19880bc3bbebc543742dedde0eed942a028b624974d6f76e6dbe154afd738732 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.501257 4988 scope.go:117] "RemoveContainer" containerID="00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.501470 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3\": container with ID starting with 00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3 not found: ID does not exist" containerID="00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.501488 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3"} err="failed to get container status \"00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3\": rpc error: code = NotFound desc = could not find container \"00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3\": container with ID starting with 00d8eeda2c21993cfbe399dd3cd9798291739b3cacb0407398bf5cbdddf82da3 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.501500 4988 scope.go:117] "RemoveContainer" containerID="bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.501747 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614\": container with ID starting with bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614 not found: ID does not exist" containerID="bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.501766 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614"} err="failed to get container status \"bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614\": rpc error: code = NotFound desc = could not find container \"bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614\": container with ID starting with bd1fbf988dce4c8837556ace8c5d5b2703a7782905229dcf501520dfcb836614 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.501779 4988 scope.go:117] "RemoveContainer" containerID="5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.501978 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26\": container with ID starting with 5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26 not found: ID does not exist" containerID="5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.501998 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26"} err="failed to get container status \"5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26\": rpc error: code = NotFound desc = could not find container \"5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26\": container with ID starting with 5d2983e5260531ce38573bd2787e8e1e807de5863bb08696dfd2576b6a070c26 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.502009 4988 scope.go:117] "RemoveContainer" containerID="658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5" Nov 23 07:11:03 crc kubenswrapper[4988]: E1123 07:11:03.502257 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5\": container with ID starting with 658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5 not found: ID does not exist" containerID="658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.502289 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5"} err="failed to get container status \"658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5\": rpc error: code = NotFound desc = could not find container \"658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5\": container with ID starting with 658f31a5bc11bc847b79e6cbcf86db0fdaf2091b9e63579254127690e4d098e5 not found: ID does not exist" Nov 23 07:11:03 crc kubenswrapper[4988]: I1123 07:11:03.968290 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f4qnt" podUID="2443148e-77cc-47d4-bfc2-60e388d69115" containerName="registry-server" containerID="cri-o://0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7" gracePeriod=2 Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.388307 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.507817 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" path="/var/lib/kubelet/pods/618fb238-2a5a-4265-9545-9ccbf016f855/volumes" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.508592 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" path="/var/lib/kubelet/pods/fa95668c-09b0-4440-ab49-f1a1b29ebf64/volumes" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.538165 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-catalog-content\") pod \"2443148e-77cc-47d4-bfc2-60e388d69115\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.538370 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64mqr\" (UniqueName: \"kubernetes.io/projected/2443148e-77cc-47d4-bfc2-60e388d69115-kube-api-access-64mqr\") pod \"2443148e-77cc-47d4-bfc2-60e388d69115\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.539584 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-utilities\") pod \"2443148e-77cc-47d4-bfc2-60e388d69115\" (UID: \"2443148e-77cc-47d4-bfc2-60e388d69115\") " Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.539908 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-utilities" (OuterVolumeSpecName: "utilities") pod "2443148e-77cc-47d4-bfc2-60e388d69115" (UID: "2443148e-77cc-47d4-bfc2-60e388d69115"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.540324 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.547412 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2443148e-77cc-47d4-bfc2-60e388d69115-kube-api-access-64mqr" (OuterVolumeSpecName: "kube-api-access-64mqr") pod "2443148e-77cc-47d4-bfc2-60e388d69115" (UID: "2443148e-77cc-47d4-bfc2-60e388d69115"). InnerVolumeSpecName "kube-api-access-64mqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.605785 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2443148e-77cc-47d4-bfc2-60e388d69115" (UID: "2443148e-77cc-47d4-bfc2-60e388d69115"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.642117 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2443148e-77cc-47d4-bfc2-60e388d69115-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.642160 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64mqr\" (UniqueName: \"kubernetes.io/projected/2443148e-77cc-47d4-bfc2-60e388d69115-kube-api-access-64mqr\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.983601 4988 generic.go:334] "Generic (PLEG): container finished" podID="2443148e-77cc-47d4-bfc2-60e388d69115" containerID="0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7" exitCode=0 Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.983679 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4qnt" event={"ID":"2443148e-77cc-47d4-bfc2-60e388d69115","Type":"ContainerDied","Data":"0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7"} Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.983733 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4qnt" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.983765 4988 scope.go:117] "RemoveContainer" containerID="0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7" Nov 23 07:11:04 crc kubenswrapper[4988]: I1123 07:11:04.983748 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4qnt" event={"ID":"2443148e-77cc-47d4-bfc2-60e388d69115","Type":"ContainerDied","Data":"2e965134558721f73b920515e899e6af19c84218885de4317d260c56dce5dd85"} Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.015683 4988 scope.go:117] "RemoveContainer" containerID="160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0" Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.029415 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4qnt"] Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.035568 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f4qnt"] Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.062872 4988 scope.go:117] "RemoveContainer" containerID="e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213" Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.094670 4988 scope.go:117] "RemoveContainer" containerID="0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7" Nov 23 07:11:05 crc kubenswrapper[4988]: E1123 07:11:05.095348 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7\": container with ID starting with 0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7 not found: ID does not exist" containerID="0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7" Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.095409 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7"} err="failed to get container status 
\"0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7\": rpc error: code = NotFound desc = could not find container \"0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7\": container with ID starting with 0800d7b51f43a5b8ea95e97da6f010ecc077154563107328c0e5def9b59b8af7 not found: ID does not exist" Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.095450 4988 scope.go:117] "RemoveContainer" containerID="160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0" Nov 23 07:11:05 crc kubenswrapper[4988]: E1123 07:11:05.095863 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0\": container with ID starting with 160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0 not found: ID does not exist" containerID="160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0" Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.095899 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0"} err="failed to get container status \"160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0\": rpc error: code = NotFound desc = could not find container \"160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0\": container with ID starting with 160fc364676e3a10a2f30b0448993244bb6e960ba15571928fc7e06a23276da0 not found: ID does not exist" Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.095929 4988 scope.go:117] "RemoveContainer" containerID="e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213" Nov 23 07:11:05 crc kubenswrapper[4988]: E1123 07:11:05.096493 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213\": container with ID starting with e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213 not found: ID does not exist" containerID="e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213" Nov 23 07:11:05 crc kubenswrapper[4988]: I1123 07:11:05.096535 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213"} err="failed to get container status \"e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213\": rpc error: code = NotFound desc = could not find container \"e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213\": container with ID starting with e19598b970eef6a95e29f66ff79350c6a34beb1c351e7b26e138b9d805f3f213 not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[4988]: I1123 07:11:06.505857 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2443148e-77cc-47d4-bfc2-60e388d69115" path="/var/lib/kubelet/pods/2443148e-77cc-47d4-bfc2-60e388d69115/volumes" Nov 23 07:11:21 crc kubenswrapper[4988]: I1123 07:11:21.672757 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:11:21 crc kubenswrapper[4988]: I1123 07:11:21.673484 4988 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.262858 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4hsbt"] Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.263925 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-replicator" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.263951 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-replicator" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.263971 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.263983 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.263997 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-replicator" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264008 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-replicator" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264023 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="rsync" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264034 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="rsync" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264053 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-server" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264065 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-server" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264084 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-updater" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264095 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-updater" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264123 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerName="neutron-api" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264135 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerName="neutron-api" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264154 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-updater" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264164 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-updater" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264188 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-replicator" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264231 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-replicator" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264248 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-server" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264260 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-server" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264283 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="swift-recon-cron" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264293 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="swift-recon-cron" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264307 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2443148e-77cc-47d4-bfc2-60e388d69115" containerName="registry-server" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264317 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2443148e-77cc-47d4-bfc2-60e388d69115" containerName="registry-server" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264334 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerName="neutron-httpd" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264346 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerName="neutron-httpd" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264368 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2443148e-77cc-47d4-bfc2-60e388d69115" containerName="extract-content" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264378 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2443148e-77cc-47d4-bfc2-60e388d69115" containerName="extract-content" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264397 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-auditor" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264407 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-auditor" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264616 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-reaper" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264626 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-reaper" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264646 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-server" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264656 4988 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-server" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264674 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2443148e-77cc-47d4-bfc2-60e388d69115" containerName="extract-utilities" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264685 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2443148e-77cc-47d4-bfc2-60e388d69115" containerName="extract-utilities" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264702 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-auditor" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264713 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-auditor" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264727 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264739 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264752 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-auditor" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264762 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-auditor" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264778 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-expirer" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264788 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-expirer" Nov 23 07:11:37 crc kubenswrapper[4988]: E1123 07:11:37.264807 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server-init" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.264817 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server-init" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265046 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="rsync" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265068 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-reaper" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265087 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovsdb-server" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265107 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-replicator" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265123 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-updater" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265144 
4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-replicator"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265160 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-server"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265178 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-replicator"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265215 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="container-auditor"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265230 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerName="neutron-api"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265243 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="873f95e0-7013-479e-b8b1-d3cf948d24fe" containerName="neutron-httpd"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265262 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-expirer"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265280 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2443148e-77cc-47d4-bfc2-60e388d69115" containerName="registry-server"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265298 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-auditor"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265316 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="account-server"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265327 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-updater"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265340 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="swift-recon-cron"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265354 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-server"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265368 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa95668c-09b0-4440-ab49-f1a1b29ebf64" containerName="object-auditor"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.265387 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="618fb238-2a5a-4265-9545-9ccbf016f855" containerName="ovs-vswitchd"
Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.267006 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hsbt"
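The RemoveStaleState burst above fires when the new community-operators-4hsbt pod is admitted: before assigning resources, the CPU and memory managers purge per-container state belonging to pods that no longer exist (the deleted swift, neutron, ovs and marketplace pods). A toy sketch of that pruning, assuming a plain map as the state store; the real managers checkpoint state to disk, and only the prune-on-admission logic is shown:

package main

import "fmt"

// key identifies one container's resource assignment, mirroring the
// podUID/containerName pairs in the RemoveStaleState log lines.
type key struct{ podUID, container string }

// removeStaleState drops every assignment belonging to a pod that is no
// longer active, so a newly admitted pod starts from a clean state store.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"fa95668c-09b0-4440-ab49-f1a1b29ebf64", "object-replicator"}: "cpus 0-1",
		{"6837dbf8-86e9-4b81-876f-8085f5d62e9d", "registry-server"}:   "cpus 2-3",
	}
	active := map[string]bool{"6837dbf8-86e9-4b81-876f-8085f5d62e9d": true}
	removeStaleState(assignments, active)
}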
Need to start a new one" pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.278501 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hsbt"] Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.371083 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clp6n\" (UniqueName: \"kubernetes.io/projected/6837dbf8-86e9-4b81-876f-8085f5d62e9d-kube-api-access-clp6n\") pod \"community-operators-4hsbt\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.371588 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-catalog-content\") pod \"community-operators-4hsbt\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.371709 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-utilities\") pod \"community-operators-4hsbt\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.473445 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clp6n\" (UniqueName: \"kubernetes.io/projected/6837dbf8-86e9-4b81-876f-8085f5d62e9d-kube-api-access-clp6n\") pod \"community-operators-4hsbt\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.473548 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-catalog-content\") pod \"community-operators-4hsbt\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.473676 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-utilities\") pod \"community-operators-4hsbt\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.474521 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-utilities\") pod \"community-operators-4hsbt\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.475953 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-catalog-content\") pod \"community-operators-4hsbt\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.495670 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-clp6n\" (UniqueName: \"kubernetes.io/projected/6837dbf8-86e9-4b81-876f-8085f5d62e9d-kube-api-access-clp6n\") pod \"community-operators-4hsbt\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:37 crc kubenswrapper[4988]: I1123 07:11:37.595467 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:38 crc kubenswrapper[4988]: I1123 07:11:38.137153 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hsbt"] Nov 23 07:11:38 crc kubenswrapper[4988]: I1123 07:11:38.366797 4988 generic.go:334] "Generic (PLEG): container finished" podID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerID="631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12" exitCode=0 Nov 23 07:11:38 crc kubenswrapper[4988]: I1123 07:11:38.367463 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hsbt" event={"ID":"6837dbf8-86e9-4b81-876f-8085f5d62e9d","Type":"ContainerDied","Data":"631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12"} Nov 23 07:11:38 crc kubenswrapper[4988]: I1123 07:11:38.367578 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hsbt" event={"ID":"6837dbf8-86e9-4b81-876f-8085f5d62e9d","Type":"ContainerStarted","Data":"57647702a49e72c5d73cae6bd8beec7977ae12054fae065348787f8b8f7bf64b"} Nov 23 07:11:39 crc kubenswrapper[4988]: I1123 07:11:39.380049 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hsbt" event={"ID":"6837dbf8-86e9-4b81-876f-8085f5d62e9d","Type":"ContainerStarted","Data":"a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74"} Nov 23 07:11:40 crc kubenswrapper[4988]: I1123 07:11:40.389044 4988 generic.go:334] "Generic (PLEG): container finished" podID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerID="a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74" exitCode=0 Nov 23 07:11:40 crc kubenswrapper[4988]: I1123 07:11:40.389162 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hsbt" event={"ID":"6837dbf8-86e9-4b81-876f-8085f5d62e9d","Type":"ContainerDied","Data":"a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74"} Nov 23 07:11:41 crc kubenswrapper[4988]: I1123 07:11:41.400703 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hsbt" event={"ID":"6837dbf8-86e9-4b81-876f-8085f5d62e9d","Type":"ContainerStarted","Data":"4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c"} Nov 23 07:11:41 crc kubenswrapper[4988]: I1123 07:11:41.426170 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4hsbt" podStartSLOduration=1.9276388359999999 podStartE2EDuration="4.426143749s" podCreationTimestamp="2025-11-23 07:11:37 +0000 UTC" firstStartedPulling="2025-11-23 07:11:38.369544718 +0000 UTC m=+1550.678057471" lastFinishedPulling="2025-11-23 07:11:40.868049611 +0000 UTC m=+1553.176562384" observedRunningTime="2025-11-23 07:11:41.41918337 +0000 UTC m=+1553.727696203" watchObservedRunningTime="2025-11-23 07:11:41.426143749 +0000 UTC m=+1553.734656552" Nov 23 07:11:47 crc kubenswrapper[4988]: I1123 07:11:47.596149 4988 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:47 crc kubenswrapper[4988]: I1123 07:11:47.598711 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:47 crc kubenswrapper[4988]: I1123 07:11:47.653243 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:48 crc kubenswrapper[4988]: I1123 07:11:48.552383 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:48 crc kubenswrapper[4988]: I1123 07:11:48.606052 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hsbt"] Nov 23 07:11:50 crc kubenswrapper[4988]: I1123 07:11:50.507380 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4hsbt" podUID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerName="registry-server" containerID="cri-o://4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c" gracePeriod=2 Nov 23 07:11:51 crc kubenswrapper[4988]: I1123 07:11:51.671889 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:11:51 crc kubenswrapper[4988]: I1123 07:11:51.672179 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:11:51 crc kubenswrapper[4988]: I1123 07:11:51.672273 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:11:51 crc kubenswrapper[4988]: I1123 07:11:51.672885 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:11:51 crc kubenswrapper[4988]: I1123 07:11:51.672929 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" gracePeriod=600 Nov 23 07:11:51 crc kubenswrapper[4988]: E1123 07:11:51.807814 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.165440 4988 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.304207 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clp6n\" (UniqueName: \"kubernetes.io/projected/6837dbf8-86e9-4b81-876f-8085f5d62e9d-kube-api-access-clp6n\") pod \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.304309 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-utilities\") pod \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.304331 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-catalog-content\") pod \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\" (UID: \"6837dbf8-86e9-4b81-876f-8085f5d62e9d\") " Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.305289 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-utilities" (OuterVolumeSpecName: "utilities") pod "6837dbf8-86e9-4b81-876f-8085f5d62e9d" (UID: "6837dbf8-86e9-4b81-876f-8085f5d62e9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.335463 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6837dbf8-86e9-4b81-876f-8085f5d62e9d-kube-api-access-clp6n" (OuterVolumeSpecName: "kube-api-access-clp6n") pod "6837dbf8-86e9-4b81-876f-8085f5d62e9d" (UID: "6837dbf8-86e9-4b81-876f-8085f5d62e9d"). InnerVolumeSpecName "kube-api-access-clp6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.393492 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6837dbf8-86e9-4b81-876f-8085f5d62e9d" (UID: "6837dbf8-86e9-4b81-876f-8085f5d62e9d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.405699 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clp6n\" (UniqueName: \"kubernetes.io/projected/6837dbf8-86e9-4b81-876f-8085f5d62e9d-kube-api-access-clp6n\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.405726 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.405736 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6837dbf8-86e9-4b81-876f-8085f5d62e9d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.531288 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" exitCode=0 Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.531349 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867"} Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.531469 4988 scope.go:117] "RemoveContainer" containerID="51af3eda6050dc8d062c5878f4b042de917b8197626fc2ae6d794dfa7ecf4da9" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.532020 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:11:52 crc kubenswrapper[4988]: E1123 07:11:52.532316 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.534514 4988 generic.go:334] "Generic (PLEG): container finished" podID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerID="4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c" exitCode=0 Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.534616 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hsbt" event={"ID":"6837dbf8-86e9-4b81-876f-8085f5d62e9d","Type":"ContainerDied","Data":"4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c"} Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.534648 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hsbt" event={"ID":"6837dbf8-86e9-4b81-876f-8085f5d62e9d","Type":"ContainerDied","Data":"57647702a49e72c5d73cae6bd8beec7977ae12054fae065348787f8b8f7bf64b"} Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.534666 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4hsbt" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.580852 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hsbt"] Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.589214 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4hsbt"] Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.600870 4988 scope.go:117] "RemoveContainer" containerID="4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.623120 4988 scope.go:117] "RemoveContainer" containerID="a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.646338 4988 scope.go:117] "RemoveContainer" containerID="631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.667730 4988 scope.go:117] "RemoveContainer" containerID="4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c" Nov 23 07:11:52 crc kubenswrapper[4988]: E1123 07:11:52.668106 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c\": container with ID starting with 4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c not found: ID does not exist" containerID="4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.668132 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c"} err="failed to get container status \"4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c\": rpc error: code = NotFound desc = could not find container \"4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c\": container with ID starting with 4083bb2367808bcdd4387ef92f50ede5868b9b186b8e21c09641ae55f4b5163c not found: ID does not exist" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.668153 4988 scope.go:117] "RemoveContainer" containerID="a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74" Nov 23 07:11:52 crc kubenswrapper[4988]: E1123 07:11:52.668407 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74\": container with ID starting with a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74 not found: ID does not exist" containerID="a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.668427 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74"} err="failed to get container status \"a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74\": rpc error: code = NotFound desc = could not find container \"a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74\": container with ID starting with a84597312d2466fd91de8322a027432db94f94d7b51fb765ef8f45fe740eba74 not found: ID does not exist" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.668440 4988 scope.go:117] "RemoveContainer" 
containerID="631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12" Nov 23 07:11:52 crc kubenswrapper[4988]: E1123 07:11:52.668733 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12\": container with ID starting with 631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12 not found: ID does not exist" containerID="631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.668804 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12"} err="failed to get container status \"631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12\": rpc error: code = NotFound desc = could not find container \"631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12\": container with ID starting with 631fb37cf9fccd53610d10d250443ab5ebc86d125a6298366bc38c41f5134a12 not found: ID does not exist" Nov 23 07:11:52 crc kubenswrapper[4988]: I1123 07:11:52.976552 4988 scope.go:117] "RemoveContainer" containerID="09b36958e7a38d8a2f724ab11c056ab39250d89750c4214925cacd2e136807f4" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.018365 4988 scope.go:117] "RemoveContainer" containerID="7559055fd2a379a7c8e479779ac08cbaa47217ff7c5e0fcb81d6ed7afd8c720d" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.057437 4988 scope.go:117] "RemoveContainer" containerID="122cf31e8ed2fa64dd2c7f0e176a961c7c365aa2f1e3596640dcafa8dea570a5" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.092459 4988 scope.go:117] "RemoveContainer" containerID="bec3e2825a178d1de93d5f5b8e2177dde9448344df03e6ee7fec9be9cc6caaf8" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.122766 4988 scope.go:117] "RemoveContainer" containerID="f11be766db4504f2e41951963bad97f99438e4e69daf034d7d2d0af098154be1" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.149983 4988 scope.go:117] "RemoveContainer" containerID="bc3c8ae80a13eff76ff5d7ab18132107c5fb43da4eae70ae1c79f0b4e86a1fea" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.179456 4988 scope.go:117] "RemoveContainer" containerID="f52af94e44bc73b049668b4f770488d00f378865dd3f3636885a12163eaae0ff" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.208263 4988 scope.go:117] "RemoveContainer" containerID="f5db40ede098b182652c10d4de12c61dbf4706d68b094c3825e1fa0b35f1889a" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.258576 4988 scope.go:117] "RemoveContainer" containerID="3a5bcd7c039d4e1e5b28798b930d0287d5c82a6c21ba4e1db683f177d84a0bea" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.290180 4988 scope.go:117] "RemoveContainer" containerID="4e8c539cafce927fa0051ee822d18f425adf13d24727826db100e93d78fe2053" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.323540 4988 scope.go:117] "RemoveContainer" containerID="248903e7765cb842974618efa644386c4868c9ee7d086c147a4a6149357f2e64" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.354557 4988 scope.go:117] "RemoveContainer" containerID="8cafe093f4c2b0f52906756ce4dcc579d9b0c31f9ae3dde6f9d8c8cd94230afc" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.382898 4988 scope.go:117] "RemoveContainer" containerID="236952f66862d458af7dafe2a87ed0e4db0e53afe9f61e5deedd1956006ce87c" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.419773 4988 scope.go:117] 
"RemoveContainer" containerID="8143de56d9d4acbbde0e68754c31ea04f739ded08fe6ab5b6d541736e19f1585" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.452725 4988 scope.go:117] "RemoveContainer" containerID="7b846d3e72b93645c0f916a10993b0f93bb92fc366104b751731cee174aecd6f" Nov 23 07:11:53 crc kubenswrapper[4988]: I1123 07:11:53.485743 4988 scope.go:117] "RemoveContainer" containerID="901399c306cb37106a8d64f41934670193530a63fe14371cd50172d426d923d6" Nov 23 07:11:54 crc kubenswrapper[4988]: I1123 07:11:54.507251 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" path="/var/lib/kubelet/pods/6837dbf8-86e9-4b81-876f-8085f5d62e9d/volumes" Nov 23 07:12:05 crc kubenswrapper[4988]: I1123 07:12:05.496437 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:12:05 crc kubenswrapper[4988]: E1123 07:12:05.497223 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:12:20 crc kubenswrapper[4988]: I1123 07:12:20.496106 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:12:20 crc kubenswrapper[4988]: E1123 07:12:20.497065 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:12:32 crc kubenswrapper[4988]: I1123 07:12:32.495838 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:12:32 crc kubenswrapper[4988]: E1123 07:12:32.496664 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:12:44 crc kubenswrapper[4988]: I1123 07:12:44.496915 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:12:44 crc kubenswrapper[4988]: E1123 07:12:44.498070 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:12:53 crc kubenswrapper[4988]: I1123 07:12:53.887878 4988 scope.go:117] "RemoveContainer" 
containerID="e747451ac1bf2e06bbd8441d5880d4818f7807bd9c7f5bc06bd3838c179ebeab" Nov 23 07:12:53 crc kubenswrapper[4988]: I1123 07:12:53.930708 4988 scope.go:117] "RemoveContainer" containerID="17fdcf6c95d00108492d6dacd204a2b7d7c6e0b1bc2c771436f7b091577c80bb" Nov 23 07:12:53 crc kubenswrapper[4988]: I1123 07:12:53.985734 4988 scope.go:117] "RemoveContainer" containerID="cca8075ca5605eaf238b8df0600c890e2ed6f61bb9da40f8c766eba7e805c422" Nov 23 07:12:54 crc kubenswrapper[4988]: I1123 07:12:54.017620 4988 scope.go:117] "RemoveContainer" containerID="04443b6b592714f8daae5298a37741922db73aa1d1ed82ef92859ab5f2626a5d" Nov 23 07:12:54 crc kubenswrapper[4988]: I1123 07:12:54.060333 4988 scope.go:117] "RemoveContainer" containerID="facd8c1229becea5d1702a120f64f08601bb811e403d35249c0df6437c57c708" Nov 23 07:12:55 crc kubenswrapper[4988]: I1123 07:12:55.496777 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:12:55 crc kubenswrapper[4988]: E1123 07:12:55.497374 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:13:08 crc kubenswrapper[4988]: I1123 07:13:08.503238 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:13:08 crc kubenswrapper[4988]: E1123 07:13:08.504125 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:13:19 crc kubenswrapper[4988]: I1123 07:13:19.496161 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:13:19 crc kubenswrapper[4988]: E1123 07:13:19.497504 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:13:30 crc kubenswrapper[4988]: I1123 07:13:30.496900 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:13:30 crc kubenswrapper[4988]: E1123 07:13:30.497787 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:13:43 crc kubenswrapper[4988]: I1123 07:13:43.496695 4988 
scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:13:43 crc kubenswrapper[4988]: E1123 07:13:43.497981 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.170383 4988 scope.go:117] "RemoveContainer" containerID="91c89292a316155edc0bb33717dfbfd97559a61804e616f57f0be6815c8aa1f1" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.209563 4988 scope.go:117] "RemoveContainer" containerID="d4c96fa15b5d2aa8b6e8be74287805d1cd138dd10074569c35fd38198b992b42" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.286919 4988 scope.go:117] "RemoveContainer" containerID="d9da8641474615fa974f048f1a86218e5baa107275dcf3f618a64368d2e3fd27" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.316841 4988 scope.go:117] "RemoveContainer" containerID="34e7480000062695ebf573eedfae499c0ceef26baec215a617dadbdff949c05f" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.353919 4988 scope.go:117] "RemoveContainer" containerID="f1350fd41c641245697cbffd16b5a000c909ca100d07d3c04f67ef7da811ed50" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.382838 4988 scope.go:117] "RemoveContainer" containerID="43836a955a8fe66876f7b7225a37522438dc7969421767e3ebb576fee4896ce2" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.432478 4988 scope.go:117] "RemoveContainer" containerID="fcdbbdc0a41476772de1268178d0e9f8aee3aaee637572f98bab9a0697ebd658" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.462945 4988 scope.go:117] "RemoveContainer" containerID="6c41f5805b7efdab5e67edf42677c7e686ae567b5ac0613407643871e1d427b1" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.489918 4988 scope.go:117] "RemoveContainer" containerID="922da97bc4cd2da2f4e745eb80e83dad312f6fcac0caca62e021bf8c20c674e1" Nov 23 07:13:54 crc kubenswrapper[4988]: I1123 07:13:54.516762 4988 scope.go:117] "RemoveContainer" containerID="dd7bc2fffd6371144890721b68d24739d43a19a086794cc91ba4e0f60f9016ab" Nov 23 07:13:56 crc kubenswrapper[4988]: I1123 07:13:56.497416 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:13:56 crc kubenswrapper[4988]: E1123 07:13:56.498492 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:14:10 crc kubenswrapper[4988]: I1123 07:14:10.496423 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:14:10 crc kubenswrapper[4988]: E1123 07:14:10.497297 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:14:24 crc kubenswrapper[4988]: I1123 07:14:24.496781 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:14:24 crc kubenswrapper[4988]: E1123 07:14:24.497643 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:14:37 crc kubenswrapper[4988]: I1123 07:14:37.496593 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:14:37 crc kubenswrapper[4988]: E1123 07:14:37.497611 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:14:51 crc kubenswrapper[4988]: I1123 07:14:51.496599 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:14:51 crc kubenswrapper[4988]: E1123 07:14:51.497674 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:14:54 crc kubenswrapper[4988]: I1123 07:14:54.732793 4988 scope.go:117] "RemoveContainer" containerID="7d023e152f127851ebcc0816fd9a68deedbc0ac220f32efd3e76af9c66c576c7" Nov 23 07:14:54 crc kubenswrapper[4988]: I1123 07:14:54.772036 4988 scope.go:117] "RemoveContainer" containerID="618d25672c5c0f17710274816f0723200e9f520a3065740bac8588506051f661" Nov 23 07:14:54 crc kubenswrapper[4988]: I1123 07:14:54.836975 4988 scope.go:117] "RemoveContainer" containerID="8e3c9d2fec29812fc7ace8e08e11d94ae904a84c512878a8526091d5bb35f819" Nov 23 07:14:54 crc kubenswrapper[4988]: I1123 07:14:54.861935 4988 scope.go:117] "RemoveContainer" containerID="5a4cefe36ba5167adbe690485fbdf4eaf246ac99bc536213997c576d5e2fb37d" Nov 23 07:14:54 crc kubenswrapper[4988]: I1123 07:14:54.884010 4988 scope.go:117] "RemoveContainer" containerID="b50b4687de8a67b2b06a7b57e4fc5944ae67fded9f5bd1f3da40846a23172fff" Nov 23 07:14:54 crc kubenswrapper[4988]: I1123 07:14:54.909895 4988 scope.go:117] "RemoveContainer" containerID="5379910b48fc1ebdae2f3fa9f537dcf35f365339c391bfe25023c2fec1e40b0c" Nov 23 07:14:54 crc kubenswrapper[4988]: I1123 07:14:54.935405 4988 scope.go:117] "RemoveContainer" containerID="006fffb78e949582925d310e274095954e034e6bdeb41702e176c21e72464298" Nov 23 07:14:54 crc kubenswrapper[4988]: I1123 07:14:54.976843 
4988 scope.go:117] "RemoveContainer" containerID="4e6fe2366cf0433936682d32ab254792a68eff37687d4d89181ec0d51fed8967" Nov 23 07:14:55 crc kubenswrapper[4988]: I1123 07:14:55.002323 4988 scope.go:117] "RemoveContainer" containerID="63f613a721d537ea4a6ffd787b180d4d5ff30b9232cb7ac04da6abb7c20801b3" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.163303 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm"] Nov 23 07:15:00 crc kubenswrapper[4988]: E1123 07:15:00.163985 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerName="registry-server" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.164000 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerName="registry-server" Nov 23 07:15:00 crc kubenswrapper[4988]: E1123 07:15:00.164014 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerName="extract-utilities" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.164022 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerName="extract-utilities" Nov 23 07:15:00 crc kubenswrapper[4988]: E1123 07:15:00.164049 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerName="extract-content" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.164060 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerName="extract-content" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.164306 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6837dbf8-86e9-4b81-876f-8085f5d62e9d" containerName="registry-server" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.164918 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.174760 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.174841 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.178893 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm"] Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.289187 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b378727f-a7df-4930-b38a-e77353e67097-secret-volume\") pod \"collect-profiles-29398035-lmbqm\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.289355 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b378727f-a7df-4930-b38a-e77353e67097-config-volume\") pod \"collect-profiles-29398035-lmbqm\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.289438 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kcs2\" (UniqueName: \"kubernetes.io/projected/b378727f-a7df-4930-b38a-e77353e67097-kube-api-access-5kcs2\") pod \"collect-profiles-29398035-lmbqm\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.390406 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b378727f-a7df-4930-b38a-e77353e67097-secret-volume\") pod \"collect-profiles-29398035-lmbqm\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.390470 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b378727f-a7df-4930-b38a-e77353e67097-config-volume\") pod \"collect-profiles-29398035-lmbqm\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.390508 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kcs2\" (UniqueName: \"kubernetes.io/projected/b378727f-a7df-4930-b38a-e77353e67097-kube-api-access-5kcs2\") pod \"collect-profiles-29398035-lmbqm\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.393605 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b378727f-a7df-4930-b38a-e77353e67097-config-volume\") pod 
\"collect-profiles-29398035-lmbqm\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.400977 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b378727f-a7df-4930-b38a-e77353e67097-secret-volume\") pod \"collect-profiles-29398035-lmbqm\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.409741 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kcs2\" (UniqueName: \"kubernetes.io/projected/b378727f-a7df-4930-b38a-e77353e67097-kube-api-access-5kcs2\") pod \"collect-profiles-29398035-lmbqm\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.491668 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:00 crc kubenswrapper[4988]: I1123 07:15:00.955013 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm"] Nov 23 07:15:01 crc kubenswrapper[4988]: I1123 07:15:01.477492 4988 generic.go:334] "Generic (PLEG): container finished" podID="b378727f-a7df-4930-b38a-e77353e67097" containerID="af1d2ab79f46c8e3b9964b808fdcaae9ad163d5b1806259e11e3024ad1fa999a" exitCode=0 Nov 23 07:15:01 crc kubenswrapper[4988]: I1123 07:15:01.477561 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" event={"ID":"b378727f-a7df-4930-b38a-e77353e67097","Type":"ContainerDied","Data":"af1d2ab79f46c8e3b9964b808fdcaae9ad163d5b1806259e11e3024ad1fa999a"} Nov 23 07:15:01 crc kubenswrapper[4988]: I1123 07:15:01.477948 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" event={"ID":"b378727f-a7df-4930-b38a-e77353e67097","Type":"ContainerStarted","Data":"9ecd6094c039df9e3770d289d6036ad239e154e6605ecfdcfe33c2ec04fc014d"} Nov 23 07:15:02 crc kubenswrapper[4988]: I1123 07:15:02.753922 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:02 crc kubenswrapper[4988]: I1123 07:15:02.927501 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b378727f-a7df-4930-b38a-e77353e67097-config-volume\") pod \"b378727f-a7df-4930-b38a-e77353e67097\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " Nov 23 07:15:02 crc kubenswrapper[4988]: I1123 07:15:02.928330 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b378727f-a7df-4930-b38a-e77353e67097-config-volume" (OuterVolumeSpecName: "config-volume") pod "b378727f-a7df-4930-b38a-e77353e67097" (UID: "b378727f-a7df-4930-b38a-e77353e67097"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:15:02 crc kubenswrapper[4988]: I1123 07:15:02.928576 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b378727f-a7df-4930-b38a-e77353e67097-secret-volume\") pod \"b378727f-a7df-4930-b38a-e77353e67097\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " Nov 23 07:15:02 crc kubenswrapper[4988]: I1123 07:15:02.928600 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kcs2\" (UniqueName: \"kubernetes.io/projected/b378727f-a7df-4930-b38a-e77353e67097-kube-api-access-5kcs2\") pod \"b378727f-a7df-4930-b38a-e77353e67097\" (UID: \"b378727f-a7df-4930-b38a-e77353e67097\") " Nov 23 07:15:02 crc kubenswrapper[4988]: I1123 07:15:02.928844 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b378727f-a7df-4930-b38a-e77353e67097-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:02 crc kubenswrapper[4988]: I1123 07:15:02.935533 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b378727f-a7df-4930-b38a-e77353e67097-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b378727f-a7df-4930-b38a-e77353e67097" (UID: "b378727f-a7df-4930-b38a-e77353e67097"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:15:02 crc kubenswrapper[4988]: I1123 07:15:02.936266 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b378727f-a7df-4930-b38a-e77353e67097-kube-api-access-5kcs2" (OuterVolumeSpecName: "kube-api-access-5kcs2") pod "b378727f-a7df-4930-b38a-e77353e67097" (UID: "b378727f-a7df-4930-b38a-e77353e67097"). InnerVolumeSpecName "kube-api-access-5kcs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:15:03 crc kubenswrapper[4988]: I1123 07:15:03.030222 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b378727f-a7df-4930-b38a-e77353e67097-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:03 crc kubenswrapper[4988]: I1123 07:15:03.030263 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kcs2\" (UniqueName: \"kubernetes.io/projected/b378727f-a7df-4930-b38a-e77353e67097-kube-api-access-5kcs2\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:03 crc kubenswrapper[4988]: I1123 07:15:03.498888 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" event={"ID":"b378727f-a7df-4930-b38a-e77353e67097","Type":"ContainerDied","Data":"9ecd6094c039df9e3770d289d6036ad239e154e6605ecfdcfe33c2ec04fc014d"} Nov 23 07:15:03 crc kubenswrapper[4988]: I1123 07:15:03.498933 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm" Nov 23 07:15:03 crc kubenswrapper[4988]: I1123 07:15:03.498967 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ecd6094c039df9e3770d289d6036ad239e154e6605ecfdcfe33c2ec04fc014d" Nov 23 07:15:04 crc kubenswrapper[4988]: I1123 07:15:04.496680 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:15:04 crc kubenswrapper[4988]: E1123 07:15:04.497474 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:15:15 crc kubenswrapper[4988]: I1123 07:15:15.496609 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:15:15 crc kubenswrapper[4988]: E1123 07:15:15.497560 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:15:26 crc kubenswrapper[4988]: I1123 07:15:26.496681 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:15:26 crc kubenswrapper[4988]: E1123 07:15:26.497677 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:15:39 crc kubenswrapper[4988]: I1123 07:15:39.496016 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:15:39 crc kubenswrapper[4988]: E1123 07:15:39.497336 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:15:54 crc kubenswrapper[4988]: I1123 07:15:54.498735 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:15:54 crc kubenswrapper[4988]: E1123 07:15:54.499845 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:15:55 crc kubenswrapper[4988]: I1123 07:15:55.094882 4988 scope.go:117] "RemoveContainer" containerID="09b29081da7241818cdcc74db9b8d720eb74975763b6c6d467e42525454be55b" Nov 23 07:15:55 crc kubenswrapper[4988]: I1123 07:15:55.125139 4988 scope.go:117] "RemoveContainer" containerID="d7a0beeffd4bb592aa4fed3908d8d8d213c8a5810078bc9828345957f014c04e" Nov 23 07:15:55 crc kubenswrapper[4988]: I1123 07:15:55.191300 4988 scope.go:117] "RemoveContainer" containerID="039436a09e2703b336d48b9a5d01f8f637d2ba7536581a357fff396f3fa3571b" Nov 23 07:16:06 crc kubenswrapper[4988]: I1123 07:16:06.496873 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:16:06 crc kubenswrapper[4988]: E1123 07:16:06.497439 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:16:17 crc kubenswrapper[4988]: I1123 07:16:17.497382 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:16:17 crc kubenswrapper[4988]: E1123 07:16:17.498726 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:16:29 crc kubenswrapper[4988]: I1123 07:16:29.495876 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:16:29 crc kubenswrapper[4988]: E1123 07:16:29.496847 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:16:42 crc kubenswrapper[4988]: I1123 07:16:42.498022 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:16:42 crc kubenswrapper[4988]: E1123 07:16:42.499113 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:16:55 crc kubenswrapper[4988]: I1123 07:16:55.262516 4988 scope.go:117] "RemoveContainer" 
containerID="3a076926a690b3b95246b9e710f1d3840e5599a0de28fa8b87053fb5acac3d3f" Nov 23 07:16:55 crc kubenswrapper[4988]: I1123 07:16:55.283534 4988 scope.go:117] "RemoveContainer" containerID="51b62861a02a883b10cbd3bcd5bbbcaf41aa233a576f99e4a22007528ed5f304" Nov 23 07:16:55 crc kubenswrapper[4988]: I1123 07:16:55.321118 4988 scope.go:117] "RemoveContainer" containerID="763109e37e768ab5ddf0aa50bd585326267053ebe8bcb0bd2dabb1628e7c8658" Nov 23 07:16:55 crc kubenswrapper[4988]: I1123 07:16:55.340845 4988 scope.go:117] "RemoveContainer" containerID="66c15cd32a49446bf787535a83ba95c1099387914c033c58e0a73521f56571b7" Nov 23 07:16:55 crc kubenswrapper[4988]: I1123 07:16:55.364504 4988 scope.go:117] "RemoveContainer" containerID="afb06f3bee648e472f785867007795ba2de84b2e773e3bec1d5d3afef328edb9" Nov 23 07:16:55 crc kubenswrapper[4988]: I1123 07:16:55.388009 4988 scope.go:117] "RemoveContainer" containerID="ddf5e557a856c1e8720639a34e6349eb693284d794748d6356d59794c5f7cb6d" Nov 23 07:16:55 crc kubenswrapper[4988]: I1123 07:16:55.415818 4988 scope.go:117] "RemoveContainer" containerID="01925025abbcd7f5c67405e82f0541a012e5128834a354abd5bed9a4d07eeebd" Nov 23 07:16:56 crc kubenswrapper[4988]: I1123 07:16:56.496921 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:16:57 crc kubenswrapper[4988]: I1123 07:16:57.619291 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"304bb6774a2d8ba34b902ca894c59a77abb1e2679cfbfc985da958dba044bc10"} Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.702344 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9k2n5"] Nov 23 07:18:50 crc kubenswrapper[4988]: E1123 07:18:50.703276 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b378727f-a7df-4930-b38a-e77353e67097" containerName="collect-profiles" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.703295 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="b378727f-a7df-4930-b38a-e77353e67097" containerName="collect-profiles" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.703493 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="b378727f-a7df-4930-b38a-e77353e67097" containerName="collect-profiles" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.704728 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.721082 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9k2n5"] Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.724783 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-utilities\") pod \"redhat-operators-9k2n5\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.724826 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-catalog-content\") pod \"redhat-operators-9k2n5\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.725035 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7m2q\" (UniqueName: \"kubernetes.io/projected/e340b553-a369-4526-83c9-dbfacdd7e928-kube-api-access-d7m2q\") pod \"redhat-operators-9k2n5\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.826357 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7m2q\" (UniqueName: \"kubernetes.io/projected/e340b553-a369-4526-83c9-dbfacdd7e928-kube-api-access-d7m2q\") pod \"redhat-operators-9k2n5\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.826642 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-utilities\") pod \"redhat-operators-9k2n5\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.826745 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-catalog-content\") pod \"redhat-operators-9k2n5\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.827170 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-utilities\") pod \"redhat-operators-9k2n5\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.827318 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-catalog-content\") pod \"redhat-operators-9k2n5\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:50 crc kubenswrapper[4988]: I1123 07:18:50.856151 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d7m2q\" (UniqueName: \"kubernetes.io/projected/e340b553-a369-4526-83c9-dbfacdd7e928-kube-api-access-d7m2q\") pod \"redhat-operators-9k2n5\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:51 crc kubenswrapper[4988]: I1123 07:18:51.028374 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:18:51 crc kubenswrapper[4988]: I1123 07:18:51.501259 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9k2n5"] Nov 23 07:18:51 crc kubenswrapper[4988]: W1123 07:18:51.502394 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode340b553_a369_4526_83c9_dbfacdd7e928.slice/crio-de60277c5061d6b23568b15d257fe88dcb8ae3051a1a25248f621a5fa025213f WatchSource:0}: Error finding container de60277c5061d6b23568b15d257fe88dcb8ae3051a1a25248f621a5fa025213f: Status 404 returned error can't find the container with id de60277c5061d6b23568b15d257fe88dcb8ae3051a1a25248f621a5fa025213f Nov 23 07:18:51 crc kubenswrapper[4988]: I1123 07:18:51.678754 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k2n5" event={"ID":"e340b553-a369-4526-83c9-dbfacdd7e928","Type":"ContainerStarted","Data":"57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e"} Nov 23 07:18:51 crc kubenswrapper[4988]: I1123 07:18:51.678992 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k2n5" event={"ID":"e340b553-a369-4526-83c9-dbfacdd7e928","Type":"ContainerStarted","Data":"de60277c5061d6b23568b15d257fe88dcb8ae3051a1a25248f621a5fa025213f"} Nov 23 07:18:51 crc kubenswrapper[4988]: I1123 07:18:51.683001 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:18:52 crc kubenswrapper[4988]: I1123 07:18:52.690930 4988 generic.go:334] "Generic (PLEG): container finished" podID="e340b553-a369-4526-83c9-dbfacdd7e928" containerID="57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e" exitCode=0 Nov 23 07:18:52 crc kubenswrapper[4988]: I1123 07:18:52.690986 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k2n5" event={"ID":"e340b553-a369-4526-83c9-dbfacdd7e928","Type":"ContainerDied","Data":"57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e"} Nov 23 07:18:52 crc kubenswrapper[4988]: I1123 07:18:52.691526 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k2n5" event={"ID":"e340b553-a369-4526-83c9-dbfacdd7e928","Type":"ContainerStarted","Data":"54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2"} Nov 23 07:18:53 crc kubenswrapper[4988]: I1123 07:18:53.702856 4988 generic.go:334] "Generic (PLEG): container finished" podID="e340b553-a369-4526-83c9-dbfacdd7e928" containerID="54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2" exitCode=0 Nov 23 07:18:53 crc kubenswrapper[4988]: I1123 07:18:53.702934 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k2n5" event={"ID":"e340b553-a369-4526-83c9-dbfacdd7e928","Type":"ContainerDied","Data":"54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2"} Nov 23 07:18:54 crc kubenswrapper[4988]: I1123 07:18:54.712534 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9k2n5" event={"ID":"e340b553-a369-4526-83c9-dbfacdd7e928","Type":"ContainerStarted","Data":"c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923"} Nov 23 07:18:54 crc kubenswrapper[4988]: I1123 07:18:54.738535 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9k2n5" podStartSLOduration=2.200765184 podStartE2EDuration="4.738514917s" podCreationTimestamp="2025-11-23 07:18:50 +0000 UTC" firstStartedPulling="2025-11-23 07:18:51.682708887 +0000 UTC m=+1983.991221660" lastFinishedPulling="2025-11-23 07:18:54.22045859 +0000 UTC m=+1986.528971393" observedRunningTime="2025-11-23 07:18:54.735053693 +0000 UTC m=+1987.043566476" watchObservedRunningTime="2025-11-23 07:18:54.738514917 +0000 UTC m=+1987.047027690" Nov 23 07:19:01 crc kubenswrapper[4988]: I1123 07:19:01.029280 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:19:01 crc kubenswrapper[4988]: I1123 07:19:01.029703 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:19:02 crc kubenswrapper[4988]: I1123 07:19:02.094575 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9k2n5" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" containerName="registry-server" probeResult="failure" output=< Nov 23 07:19:02 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 07:19:02 crc kubenswrapper[4988]: > Nov 23 07:19:11 crc kubenswrapper[4988]: I1123 07:19:11.070357 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:19:11 crc kubenswrapper[4988]: I1123 07:19:11.136621 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:19:11 crc kubenswrapper[4988]: I1123 07:19:11.303451 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9k2n5"] Nov 23 07:19:12 crc kubenswrapper[4988]: I1123 07:19:12.881725 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9k2n5" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" containerName="registry-server" containerID="cri-o://c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923" gracePeriod=2 Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.370289 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.489447 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-catalog-content\") pod \"e340b553-a369-4526-83c9-dbfacdd7e928\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.489570 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7m2q\" (UniqueName: \"kubernetes.io/projected/e340b553-a369-4526-83c9-dbfacdd7e928-kube-api-access-d7m2q\") pod \"e340b553-a369-4526-83c9-dbfacdd7e928\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.489777 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-utilities\") pod \"e340b553-a369-4526-83c9-dbfacdd7e928\" (UID: \"e340b553-a369-4526-83c9-dbfacdd7e928\") " Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.490627 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-utilities" (OuterVolumeSpecName: "utilities") pod "e340b553-a369-4526-83c9-dbfacdd7e928" (UID: "e340b553-a369-4526-83c9-dbfacdd7e928"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.498682 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e340b553-a369-4526-83c9-dbfacdd7e928-kube-api-access-d7m2q" (OuterVolumeSpecName: "kube-api-access-d7m2q") pod "e340b553-a369-4526-83c9-dbfacdd7e928" (UID: "e340b553-a369-4526-83c9-dbfacdd7e928"). InnerVolumeSpecName "kube-api-access-d7m2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.592586 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7m2q\" (UniqueName: \"kubernetes.io/projected/e340b553-a369-4526-83c9-dbfacdd7e928-kube-api-access-d7m2q\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.592636 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.630140 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e340b553-a369-4526-83c9-dbfacdd7e928" (UID: "e340b553-a369-4526-83c9-dbfacdd7e928"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.693555 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e340b553-a369-4526-83c9-dbfacdd7e928-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.898554 4988 generic.go:334] "Generic (PLEG): container finished" podID="e340b553-a369-4526-83c9-dbfacdd7e928" containerID="c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923" exitCode=0 Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.898619 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k2n5" event={"ID":"e340b553-a369-4526-83c9-dbfacdd7e928","Type":"ContainerDied","Data":"c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923"} Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.898679 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9k2n5" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.900091 4988 scope.go:117] "RemoveContainer" containerID="c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.899935 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k2n5" event={"ID":"e340b553-a369-4526-83c9-dbfacdd7e928","Type":"ContainerDied","Data":"de60277c5061d6b23568b15d257fe88dcb8ae3051a1a25248f621a5fa025213f"} Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.933478 4988 scope.go:117] "RemoveContainer" containerID="54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2" Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.958159 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9k2n5"] Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.967783 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9k2n5"] Nov 23 07:19:13 crc kubenswrapper[4988]: I1123 07:19:13.981993 4988 scope.go:117] "RemoveContainer" containerID="57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e" Nov 23 07:19:14 crc kubenswrapper[4988]: I1123 07:19:14.012127 4988 scope.go:117] "RemoveContainer" containerID="c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923" Nov 23 07:19:14 crc kubenswrapper[4988]: E1123 07:19:14.012856 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923\": container with ID starting with c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923 not found: ID does not exist" containerID="c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923" Nov 23 07:19:14 crc kubenswrapper[4988]: I1123 07:19:14.012947 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923"} err="failed to get container status \"c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923\": rpc error: code = NotFound desc = could not find container \"c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923\": container with ID starting with c56fcae748ba22f35bbb3b8151333e1f0b8f072dc746e19135c71c29f318e923 not found: ID does not exist" Nov 23 07:19:14 crc 
kubenswrapper[4988]: I1123 07:19:14.013003 4988 scope.go:117] "RemoveContainer" containerID="54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2" Nov 23 07:19:14 crc kubenswrapper[4988]: E1123 07:19:14.013692 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2\": container with ID starting with 54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2 not found: ID does not exist" containerID="54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2" Nov 23 07:19:14 crc kubenswrapper[4988]: I1123 07:19:14.013761 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2"} err="failed to get container status \"54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2\": rpc error: code = NotFound desc = could not find container \"54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2\": container with ID starting with 54aff3cc17b4ad8e89e12e25203846f21937ba3c6ca2ee54f9cc5dce3bbc15a2 not found: ID does not exist" Nov 23 07:19:14 crc kubenswrapper[4988]: I1123 07:19:14.013801 4988 scope.go:117] "RemoveContainer" containerID="57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e" Nov 23 07:19:14 crc kubenswrapper[4988]: E1123 07:19:14.014285 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e\": container with ID starting with 57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e not found: ID does not exist" containerID="57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e" Nov 23 07:19:14 crc kubenswrapper[4988]: I1123 07:19:14.014334 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e"} err="failed to get container status \"57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e\": rpc error: code = NotFound desc = could not find container \"57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e\": container with ID starting with 57cec5bca884796b22e36c755450bc54b63903c653a17203181ea4433eb8632e not found: ID does not exist" Nov 23 07:19:14 crc kubenswrapper[4988]: I1123 07:19:14.514138 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" path="/var/lib/kubelet/pods/e340b553-a369-4526-83c9-dbfacdd7e928/volumes" Nov 23 07:19:21 crc kubenswrapper[4988]: I1123 07:19:21.672430 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:19:21 crc kubenswrapper[4988]: I1123 07:19:21.673072 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:19:51 crc kubenswrapper[4988]: I1123 07:19:51.672896 4988 patch_prober.go:28] interesting 
pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:19:51 crc kubenswrapper[4988]: I1123 07:19:51.673654 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.743798 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pvk7n"] Nov 23 07:20:15 crc kubenswrapper[4988]: E1123 07:20:15.744921 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" containerName="extract-utilities" Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.744943 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" containerName="extract-utilities" Nov 23 07:20:15 crc kubenswrapper[4988]: E1123 07:20:15.744968 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" containerName="extract-content" Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.744979 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" containerName="extract-content" Nov 23 07:20:15 crc kubenswrapper[4988]: E1123 07:20:15.745025 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" containerName="registry-server" Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.745036 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" containerName="registry-server" Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.745350 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="e340b553-a369-4526-83c9-dbfacdd7e928" containerName="registry-server" Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.746908 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.764962 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvk7n"] Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.939299 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcz7s\" (UniqueName: \"kubernetes.io/projected/286b2cfa-dd61-4a63-a065-f3e37457aba1-kube-api-access-mcz7s\") pod \"redhat-marketplace-pvk7n\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.939356 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-utilities\") pod \"redhat-marketplace-pvk7n\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:15 crc kubenswrapper[4988]: I1123 07:20:15.939666 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-catalog-content\") pod \"redhat-marketplace-pvk7n\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:16 crc kubenswrapper[4988]: I1123 07:20:16.040812 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcz7s\" (UniqueName: \"kubernetes.io/projected/286b2cfa-dd61-4a63-a065-f3e37457aba1-kube-api-access-mcz7s\") pod \"redhat-marketplace-pvk7n\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:16 crc kubenswrapper[4988]: I1123 07:20:16.040862 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-utilities\") pod \"redhat-marketplace-pvk7n\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:16 crc kubenswrapper[4988]: I1123 07:20:16.040902 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-catalog-content\") pod \"redhat-marketplace-pvk7n\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:16 crc kubenswrapper[4988]: I1123 07:20:16.041432 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-catalog-content\") pod \"redhat-marketplace-pvk7n\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:16 crc kubenswrapper[4988]: I1123 07:20:16.041463 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-utilities\") pod \"redhat-marketplace-pvk7n\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:16 crc kubenswrapper[4988]: I1123 07:20:16.062274 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mcz7s\" (UniqueName: \"kubernetes.io/projected/286b2cfa-dd61-4a63-a065-f3e37457aba1-kube-api-access-mcz7s\") pod \"redhat-marketplace-pvk7n\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:16 crc kubenswrapper[4988]: I1123 07:20:16.075957 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:16 crc kubenswrapper[4988]: I1123 07:20:16.514079 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvk7n"] Nov 23 07:20:16 crc kubenswrapper[4988]: I1123 07:20:16.543438 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvk7n" event={"ID":"286b2cfa-dd61-4a63-a065-f3e37457aba1","Type":"ContainerStarted","Data":"cacec47d8dee642de6bfc86d3c27fa72fccb7854dbc24aeee5831f8868a184d1"} Nov 23 07:20:17 crc kubenswrapper[4988]: I1123 07:20:17.557639 4988 generic.go:334] "Generic (PLEG): container finished" podID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerID="100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25" exitCode=0 Nov 23 07:20:17 crc kubenswrapper[4988]: I1123 07:20:17.557720 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvk7n" event={"ID":"286b2cfa-dd61-4a63-a065-f3e37457aba1","Type":"ContainerDied","Data":"100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25"} Nov 23 07:20:18 crc kubenswrapper[4988]: I1123 07:20:18.570941 4988 generic.go:334] "Generic (PLEG): container finished" podID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerID="65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1" exitCode=0 Nov 23 07:20:18 crc kubenswrapper[4988]: I1123 07:20:18.571050 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvk7n" event={"ID":"286b2cfa-dd61-4a63-a065-f3e37457aba1","Type":"ContainerDied","Data":"65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1"} Nov 23 07:20:19 crc kubenswrapper[4988]: I1123 07:20:19.582234 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvk7n" event={"ID":"286b2cfa-dd61-4a63-a065-f3e37457aba1","Type":"ContainerStarted","Data":"0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73"} Nov 23 07:20:19 crc kubenswrapper[4988]: I1123 07:20:19.615340 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pvk7n" podStartSLOduration=3.142383243 podStartE2EDuration="4.615319565s" podCreationTimestamp="2025-11-23 07:20:15 +0000 UTC" firstStartedPulling="2025-11-23 07:20:17.559871733 +0000 UTC m=+2069.868384526" lastFinishedPulling="2025-11-23 07:20:19.032808055 +0000 UTC m=+2071.341320848" observedRunningTime="2025-11-23 07:20:19.606741307 +0000 UTC m=+2071.915254140" watchObservedRunningTime="2025-11-23 07:20:19.615319565 +0000 UTC m=+2071.923832328" Nov 23 07:20:21 crc kubenswrapper[4988]: I1123 07:20:21.672008 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:20:21 crc kubenswrapper[4988]: I1123 07:20:21.673341 4988 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:20:21 crc kubenswrapper[4988]: I1123 07:20:21.673425 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:20:21 crc kubenswrapper[4988]: I1123 07:20:21.674363 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"304bb6774a2d8ba34b902ca894c59a77abb1e2679cfbfc985da958dba044bc10"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:20:21 crc kubenswrapper[4988]: I1123 07:20:21.674475 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://304bb6774a2d8ba34b902ca894c59a77abb1e2679cfbfc985da958dba044bc10" gracePeriod=600 Nov 23 07:20:22 crc kubenswrapper[4988]: I1123 07:20:22.611591 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="304bb6774a2d8ba34b902ca894c59a77abb1e2679cfbfc985da958dba044bc10" exitCode=0 Nov 23 07:20:22 crc kubenswrapper[4988]: I1123 07:20:22.611704 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"304bb6774a2d8ba34b902ca894c59a77abb1e2679cfbfc985da958dba044bc10"} Nov 23 07:20:22 crc kubenswrapper[4988]: I1123 07:20:22.612036 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028"} Nov 23 07:20:22 crc kubenswrapper[4988]: I1123 07:20:22.612062 4988 scope.go:117] "RemoveContainer" containerID="f2d99d936da8b93386619a1e2cbc6370a2876fb81956b641f8b2dddd37948867" Nov 23 07:20:26 crc kubenswrapper[4988]: I1123 07:20:26.076544 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:26 crc kubenswrapper[4988]: I1123 07:20:26.077415 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:26 crc kubenswrapper[4988]: I1123 07:20:26.131156 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:26 crc kubenswrapper[4988]: I1123 07:20:26.719280 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:26 crc kubenswrapper[4988]: I1123 07:20:26.778050 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvk7n"] Nov 23 07:20:28 crc kubenswrapper[4988]: I1123 07:20:28.675066 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pvk7n" 
podUID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerName="registry-server" containerID="cri-o://0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73" gracePeriod=2 Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.133177 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.270530 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-catalog-content\") pod \"286b2cfa-dd61-4a63-a065-f3e37457aba1\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.270655 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-utilities\") pod \"286b2cfa-dd61-4a63-a065-f3e37457aba1\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.270716 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcz7s\" (UniqueName: \"kubernetes.io/projected/286b2cfa-dd61-4a63-a065-f3e37457aba1-kube-api-access-mcz7s\") pod \"286b2cfa-dd61-4a63-a065-f3e37457aba1\" (UID: \"286b2cfa-dd61-4a63-a065-f3e37457aba1\") " Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.272337 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-utilities" (OuterVolumeSpecName: "utilities") pod "286b2cfa-dd61-4a63-a065-f3e37457aba1" (UID: "286b2cfa-dd61-4a63-a065-f3e37457aba1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.279864 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/286b2cfa-dd61-4a63-a065-f3e37457aba1-kube-api-access-mcz7s" (OuterVolumeSpecName: "kube-api-access-mcz7s") pod "286b2cfa-dd61-4a63-a065-f3e37457aba1" (UID: "286b2cfa-dd61-4a63-a065-f3e37457aba1"). InnerVolumeSpecName "kube-api-access-mcz7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.308360 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "286b2cfa-dd61-4a63-a065-f3e37457aba1" (UID: "286b2cfa-dd61-4a63-a065-f3e37457aba1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.372821 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.372866 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286b2cfa-dd61-4a63-a065-f3e37457aba1-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.372887 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcz7s\" (UniqueName: \"kubernetes.io/projected/286b2cfa-dd61-4a63-a065-f3e37457aba1-kube-api-access-mcz7s\") on node \"crc\" DevicePath \"\"" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.686989 4988 generic.go:334] "Generic (PLEG): container finished" podID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerID="0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73" exitCode=0 Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.687064 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvk7n" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.687064 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvk7n" event={"ID":"286b2cfa-dd61-4a63-a065-f3e37457aba1","Type":"ContainerDied","Data":"0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73"} Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.687296 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvk7n" event={"ID":"286b2cfa-dd61-4a63-a065-f3e37457aba1","Type":"ContainerDied","Data":"cacec47d8dee642de6bfc86d3c27fa72fccb7854dbc24aeee5831f8868a184d1"} Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.687361 4988 scope.go:117] "RemoveContainer" containerID="0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.713514 4988 scope.go:117] "RemoveContainer" containerID="65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.735319 4988 scope.go:117] "RemoveContainer" containerID="100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.794296 4988 scope.go:117] "RemoveContainer" containerID="0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73" Nov 23 07:20:29 crc kubenswrapper[4988]: E1123 07:20:29.795384 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73\": container with ID starting with 0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73 not found: ID does not exist" containerID="0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.795440 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73"} err="failed to get container status \"0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73\": rpc error: code = NotFound desc = could not find container 
\"0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73\": container with ID starting with 0d41edbc32ad2b34b1079001b8331513c0fbbbe92a8e5056667e273995b74c73 not found: ID does not exist" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.795470 4988 scope.go:117] "RemoveContainer" containerID="65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1" Nov 23 07:20:29 crc kubenswrapper[4988]: E1123 07:20:29.796040 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1\": container with ID starting with 65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1 not found: ID does not exist" containerID="65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.796116 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1"} err="failed to get container status \"65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1\": rpc error: code = NotFound desc = could not find container \"65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1\": container with ID starting with 65eba914ad07c1145fad035726c4dc8c11e3191cdf1addca015e4ec26a219ac1 not found: ID does not exist" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.796152 4988 scope.go:117] "RemoveContainer" containerID="100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25" Nov 23 07:20:29 crc kubenswrapper[4988]: E1123 07:20:29.796653 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25\": container with ID starting with 100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25 not found: ID does not exist" containerID="100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.796729 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25"} err="failed to get container status \"100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25\": rpc error: code = NotFound desc = could not find container \"100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25\": container with ID starting with 100258270a43f8354783349091fe6e77e916ad44504ba20555d99da475b69b25 not found: ID does not exist" Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.803185 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvk7n"] Nov 23 07:20:29 crc kubenswrapper[4988]: I1123 07:20:29.810285 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvk7n"] Nov 23 07:20:30 crc kubenswrapper[4988]: I1123 07:20:30.505355 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="286b2cfa-dd61-4a63-a065-f3e37457aba1" path="/var/lib/kubelet/pods/286b2cfa-dd61-4a63-a065-f3e37457aba1/volumes" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.541815 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9w8cd"] Nov 23 07:21:41 crc kubenswrapper[4988]: E1123 07:21:41.542803 4988 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerName="extract-content" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.542821 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerName="extract-content" Nov 23 07:21:41 crc kubenswrapper[4988]: E1123 07:21:41.542858 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerName="registry-server" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.542871 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerName="registry-server" Nov 23 07:21:41 crc kubenswrapper[4988]: E1123 07:21:41.542892 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerName="extract-utilities" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.542900 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerName="extract-utilities" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.543082 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="286b2cfa-dd61-4a63-a065-f3e37457aba1" containerName="registry-server" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.544439 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.561358 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9w8cd"] Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.665116 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-utilities\") pod \"community-operators-9w8cd\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.665510 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-catalog-content\") pod \"community-operators-9w8cd\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.665660 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f77qf\" (UniqueName: \"kubernetes.io/projected/fee4939f-0f39-46c6-a451-0dd91de464ee-kube-api-access-f77qf\") pod \"community-operators-9w8cd\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.766510 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f77qf\" (UniqueName: \"kubernetes.io/projected/fee4939f-0f39-46c6-a451-0dd91de464ee-kube-api-access-f77qf\") pod \"community-operators-9w8cd\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.766846 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-utilities\") 
pod \"community-operators-9w8cd\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.766964 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-catalog-content\") pod \"community-operators-9w8cd\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.767320 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-utilities\") pod \"community-operators-9w8cd\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.767574 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-catalog-content\") pod \"community-operators-9w8cd\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.797041 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f77qf\" (UniqueName: \"kubernetes.io/projected/fee4939f-0f39-46c6-a451-0dd91de464ee-kube-api-access-f77qf\") pod \"community-operators-9w8cd\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:41 crc kubenswrapper[4988]: I1123 07:21:41.899470 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:42 crc kubenswrapper[4988]: I1123 07:21:42.386840 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9w8cd"] Nov 23 07:21:43 crc kubenswrapper[4988]: I1123 07:21:43.399757 4988 generic.go:334] "Generic (PLEG): container finished" podID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerID="aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad" exitCode=0 Nov 23 07:21:43 crc kubenswrapper[4988]: I1123 07:21:43.399799 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8cd" event={"ID":"fee4939f-0f39-46c6-a451-0dd91de464ee","Type":"ContainerDied","Data":"aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad"} Nov 23 07:21:43 crc kubenswrapper[4988]: I1123 07:21:43.399825 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8cd" event={"ID":"fee4939f-0f39-46c6-a451-0dd91de464ee","Type":"ContainerStarted","Data":"29619c891bc9e9bad6640ab6094eeb339f93f73617be18d2163a5bec4d3a5371"} Nov 23 07:21:44 crc kubenswrapper[4988]: I1123 07:21:44.409324 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8cd" event={"ID":"fee4939f-0f39-46c6-a451-0dd91de464ee","Type":"ContainerStarted","Data":"389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857"} Nov 23 07:21:45 crc kubenswrapper[4988]: I1123 07:21:45.421009 4988 generic.go:334] "Generic (PLEG): container finished" podID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerID="389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857" exitCode=0 Nov 23 07:21:45 crc kubenswrapper[4988]: I1123 07:21:45.421073 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8cd" event={"ID":"fee4939f-0f39-46c6-a451-0dd91de464ee","Type":"ContainerDied","Data":"389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857"} Nov 23 07:21:46 crc kubenswrapper[4988]: I1123 07:21:46.434021 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8cd" event={"ID":"fee4939f-0f39-46c6-a451-0dd91de464ee","Type":"ContainerStarted","Data":"985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3"} Nov 23 07:21:46 crc kubenswrapper[4988]: I1123 07:21:46.457145 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9w8cd" podStartSLOduration=3.013719593 podStartE2EDuration="5.457112657s" podCreationTimestamp="2025-11-23 07:21:41 +0000 UTC" firstStartedPulling="2025-11-23 07:21:43.402510856 +0000 UTC m=+2155.711023659" lastFinishedPulling="2025-11-23 07:21:45.84590395 +0000 UTC m=+2158.154416723" observedRunningTime="2025-11-23 07:21:46.454968885 +0000 UTC m=+2158.763481658" watchObservedRunningTime="2025-11-23 07:21:46.457112657 +0000 UTC m=+2158.765625460" Nov 23 07:21:51 crc kubenswrapper[4988]: I1123 07:21:51.900331 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:51 crc kubenswrapper[4988]: I1123 07:21:51.900975 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:51 crc kubenswrapper[4988]: I1123 07:21:51.986721 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:52 crc kubenswrapper[4988]: I1123 07:21:52.533774 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:52 crc kubenswrapper[4988]: I1123 07:21:52.597152 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9w8cd"] Nov 23 07:21:54 crc kubenswrapper[4988]: I1123 07:21:54.510174 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9w8cd" podUID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerName="registry-server" containerID="cri-o://985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3" gracePeriod=2 Nov 23 07:21:54 crc kubenswrapper[4988]: I1123 07:21:54.964627 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.067503 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-utilities\") pod \"fee4939f-0f39-46c6-a451-0dd91de464ee\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.067710 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f77qf\" (UniqueName: \"kubernetes.io/projected/fee4939f-0f39-46c6-a451-0dd91de464ee-kube-api-access-f77qf\") pod \"fee4939f-0f39-46c6-a451-0dd91de464ee\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.067752 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-catalog-content\") pod \"fee4939f-0f39-46c6-a451-0dd91de464ee\" (UID: \"fee4939f-0f39-46c6-a451-0dd91de464ee\") " Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.069535 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-utilities" (OuterVolumeSpecName: "utilities") pod "fee4939f-0f39-46c6-a451-0dd91de464ee" (UID: "fee4939f-0f39-46c6-a451-0dd91de464ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.080592 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee4939f-0f39-46c6-a451-0dd91de464ee-kube-api-access-f77qf" (OuterVolumeSpecName: "kube-api-access-f77qf") pod "fee4939f-0f39-46c6-a451-0dd91de464ee" (UID: "fee4939f-0f39-46c6-a451-0dd91de464ee"). InnerVolumeSpecName "kube-api-access-f77qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.169677 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f77qf\" (UniqueName: \"kubernetes.io/projected/fee4939f-0f39-46c6-a451-0dd91de464ee-kube-api-access-f77qf\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.170034 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.376550 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fee4939f-0f39-46c6-a451-0dd91de464ee" (UID: "fee4939f-0f39-46c6-a451-0dd91de464ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.474779 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee4939f-0f39-46c6-a451-0dd91de464ee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.523948 4988 generic.go:334] "Generic (PLEG): container finished" podID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerID="985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3" exitCode=0 Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.524043 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8cd" event={"ID":"fee4939f-0f39-46c6-a451-0dd91de464ee","Type":"ContainerDied","Data":"985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3"} Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.524070 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9w8cd" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.524584 4988 scope.go:117] "RemoveContainer" containerID="985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.524458 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9w8cd" event={"ID":"fee4939f-0f39-46c6-a451-0dd91de464ee","Type":"ContainerDied","Data":"29619c891bc9e9bad6640ab6094eeb339f93f73617be18d2163a5bec4d3a5371"} Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.555022 4988 scope.go:117] "RemoveContainer" containerID="389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.589920 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9w8cd"] Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.601297 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9w8cd"] Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.606004 4988 scope.go:117] "RemoveContainer" containerID="aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.627618 4988 scope.go:117] "RemoveContainer" containerID="985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3" Nov 23 07:21:55 crc kubenswrapper[4988]: E1123 07:21:55.628175 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3\": container with ID starting with 985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3 not found: ID does not exist" containerID="985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.628261 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3"} err="failed to get container status \"985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3\": rpc error: code = NotFound desc = could not find container \"985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3\": container with ID starting with 985e9da164662e45cf6f4508044bcbb5250a2ba161866b423ab2926599ca37e3 not found: ID does not exist" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.628287 4988 scope.go:117] "RemoveContainer" containerID="389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857" Nov 23 07:21:55 crc kubenswrapper[4988]: E1123 07:21:55.628672 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857\": container with ID starting with 389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857 not found: ID does not exist" containerID="389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.628697 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857"} err="failed to get container status \"389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857\": rpc error: code = NotFound desc = could not find 
container \"389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857\": container with ID starting with 389e46b182612fae06e4ed128f8d50a99bfb0e6a45978f032a59f9ea498df857 not found: ID does not exist" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.628709 4988 scope.go:117] "RemoveContainer" containerID="aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad" Nov 23 07:21:55 crc kubenswrapper[4988]: E1123 07:21:55.629044 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad\": container with ID starting with aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad not found: ID does not exist" containerID="aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad" Nov 23 07:21:55 crc kubenswrapper[4988]: I1123 07:21:55.629097 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad"} err="failed to get container status \"aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad\": rpc error: code = NotFound desc = could not find container \"aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad\": container with ID starting with aa985c8575b333984ee66f0609ba9c72d6b020748f230da55d2a4d36a278b1ad not found: ID does not exist" Nov 23 07:21:56 crc kubenswrapper[4988]: I1123 07:21:56.513297 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee4939f-0f39-46c6-a451-0dd91de464ee" path="/var/lib/kubelet/pods/fee4939f-0f39-46c6-a451-0dd91de464ee/volumes" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.068034 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rm7pp"] Nov 23 07:22:09 crc kubenswrapper[4988]: E1123 07:22:09.069758 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerName="extract-utilities" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.069786 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerName="extract-utilities" Nov 23 07:22:09 crc kubenswrapper[4988]: E1123 07:22:09.069808 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerName="extract-content" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.069819 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerName="extract-content" Nov 23 07:22:09 crc kubenswrapper[4988]: E1123 07:22:09.069846 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerName="registry-server" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.069858 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerName="registry-server" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.070097 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee4939f-0f39-46c6-a451-0dd91de464ee" containerName="registry-server" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.071800 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.090151 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rm7pp"] Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.185884 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-catalog-content\") pod \"certified-operators-rm7pp\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.185939 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-utilities\") pod \"certified-operators-rm7pp\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.185974 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2dpw\" (UniqueName: \"kubernetes.io/projected/98325823-08a0-434c-be88-c18340835ce1-kube-api-access-q2dpw\") pod \"certified-operators-rm7pp\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.287473 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-catalog-content\") pod \"certified-operators-rm7pp\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.287531 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-utilities\") pod \"certified-operators-rm7pp\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.287577 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2dpw\" (UniqueName: \"kubernetes.io/projected/98325823-08a0-434c-be88-c18340835ce1-kube-api-access-q2dpw\") pod \"certified-operators-rm7pp\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.288155 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-catalog-content\") pod \"certified-operators-rm7pp\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.288410 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-utilities\") pod \"certified-operators-rm7pp\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.312343 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q2dpw\" (UniqueName: \"kubernetes.io/projected/98325823-08a0-434c-be88-c18340835ce1-kube-api-access-q2dpw\") pod \"certified-operators-rm7pp\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.390676 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:09 crc kubenswrapper[4988]: I1123 07:22:09.854142 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rm7pp"] Nov 23 07:22:10 crc kubenswrapper[4988]: I1123 07:22:10.659438 4988 generic.go:334] "Generic (PLEG): container finished" podID="98325823-08a0-434c-be88-c18340835ce1" containerID="78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667" exitCode=0 Nov 23 07:22:10 crc kubenswrapper[4988]: I1123 07:22:10.659569 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm7pp" event={"ID":"98325823-08a0-434c-be88-c18340835ce1","Type":"ContainerDied","Data":"78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667"} Nov 23 07:22:10 crc kubenswrapper[4988]: I1123 07:22:10.659867 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm7pp" event={"ID":"98325823-08a0-434c-be88-c18340835ce1","Type":"ContainerStarted","Data":"d45f46d488a831df217d0c8000154828cd21d8749eee3ce822802a457bb38c5e"} Nov 23 07:22:11 crc kubenswrapper[4988]: I1123 07:22:11.674256 4988 generic.go:334] "Generic (PLEG): container finished" podID="98325823-08a0-434c-be88-c18340835ce1" containerID="1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e" exitCode=0 Nov 23 07:22:11 crc kubenswrapper[4988]: I1123 07:22:11.674374 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm7pp" event={"ID":"98325823-08a0-434c-be88-c18340835ce1","Type":"ContainerDied","Data":"1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e"} Nov 23 07:22:12 crc kubenswrapper[4988]: I1123 07:22:12.687072 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm7pp" event={"ID":"98325823-08a0-434c-be88-c18340835ce1","Type":"ContainerStarted","Data":"b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd"} Nov 23 07:22:12 crc kubenswrapper[4988]: I1123 07:22:12.718588 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rm7pp" podStartSLOduration=2.294757266 podStartE2EDuration="3.718562478s" podCreationTimestamp="2025-11-23 07:22:09 +0000 UTC" firstStartedPulling="2025-11-23 07:22:10.661961944 +0000 UTC m=+2182.970474747" lastFinishedPulling="2025-11-23 07:22:12.085767196 +0000 UTC m=+2184.394279959" observedRunningTime="2025-11-23 07:22:12.712514841 +0000 UTC m=+2185.021027614" watchObservedRunningTime="2025-11-23 07:22:12.718562478 +0000 UTC m=+2185.027075271" Nov 23 07:22:19 crc kubenswrapper[4988]: I1123 07:22:19.390913 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:19 crc kubenswrapper[4988]: I1123 07:22:19.391312 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:19 crc kubenswrapper[4988]: I1123 07:22:19.456144 4988 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:19 crc kubenswrapper[4988]: I1123 07:22:19.806938 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:19 crc kubenswrapper[4988]: I1123 07:22:19.855297 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rm7pp"] Nov 23 07:22:21 crc kubenswrapper[4988]: I1123 07:22:21.781911 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rm7pp" podUID="98325823-08a0-434c-be88-c18340835ce1" containerName="registry-server" containerID="cri-o://b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd" gracePeriod=2 Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.319312 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.517020 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-utilities\") pod \"98325823-08a0-434c-be88-c18340835ce1\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.517090 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-catalog-content\") pod \"98325823-08a0-434c-be88-c18340835ce1\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.517184 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2dpw\" (UniqueName: \"kubernetes.io/projected/98325823-08a0-434c-be88-c18340835ce1-kube-api-access-q2dpw\") pod \"98325823-08a0-434c-be88-c18340835ce1\" (UID: \"98325823-08a0-434c-be88-c18340835ce1\") " Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.519129 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-utilities" (OuterVolumeSpecName: "utilities") pod "98325823-08a0-434c-be88-c18340835ce1" (UID: "98325823-08a0-434c-be88-c18340835ce1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.525622 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98325823-08a0-434c-be88-c18340835ce1-kube-api-access-q2dpw" (OuterVolumeSpecName: "kube-api-access-q2dpw") pod "98325823-08a0-434c-be88-c18340835ce1" (UID: "98325823-08a0-434c-be88-c18340835ce1"). InnerVolumeSpecName "kube-api-access-q2dpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.562017 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98325823-08a0-434c-be88-c18340835ce1" (UID: "98325823-08a0-434c-be88-c18340835ce1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.619225 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.619283 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98325823-08a0-434c-be88-c18340835ce1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.619314 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2dpw\" (UniqueName: \"kubernetes.io/projected/98325823-08a0-434c-be88-c18340835ce1-kube-api-access-q2dpw\") on node \"crc\" DevicePath \"\"" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.794960 4988 generic.go:334] "Generic (PLEG): container finished" podID="98325823-08a0-434c-be88-c18340835ce1" containerID="b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd" exitCode=0 Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.795025 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm7pp" event={"ID":"98325823-08a0-434c-be88-c18340835ce1","Type":"ContainerDied","Data":"b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd"} Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.795066 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rm7pp" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.795090 4988 scope.go:117] "RemoveContainer" containerID="b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.795069 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rm7pp" event={"ID":"98325823-08a0-434c-be88-c18340835ce1","Type":"ContainerDied","Data":"d45f46d488a831df217d0c8000154828cd21d8749eee3ce822802a457bb38c5e"} Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.850874 4988 scope.go:117] "RemoveContainer" containerID="1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.853937 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rm7pp"] Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.859501 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rm7pp"] Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.881405 4988 scope.go:117] "RemoveContainer" containerID="78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.927728 4988 scope.go:117] "RemoveContainer" containerID="b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd" Nov 23 07:22:22 crc kubenswrapper[4988]: E1123 07:22:22.928151 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd\": container with ID starting with b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd not found: ID does not exist" containerID="b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd" Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.928216 
4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd"} err="failed to get container status \"b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd\": rpc error: code = NotFound desc = could not find container \"b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd\": container with ID starting with b3e2c0c2ea5e8dd67fbeda4390af7f012628dffdf7b6a201f268642f604f19dd not found: ID does not exist"
Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.928246 4988 scope.go:117] "RemoveContainer" containerID="1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e"
Nov 23 07:22:22 crc kubenswrapper[4988]: E1123 07:22:22.928807 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e\": container with ID starting with 1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e not found: ID does not exist" containerID="1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e"
Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.928853 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e"} err="failed to get container status \"1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e\": rpc error: code = NotFound desc = could not find container \"1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e\": container with ID starting with 1aa8fea09508c27c510ed875d363b6e65d41cd98a09f9da16a46aa6bf1c1e73e not found: ID does not exist"
Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.928884 4988 scope.go:117] "RemoveContainer" containerID="78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667"
Nov 23 07:22:22 crc kubenswrapper[4988]: E1123 07:22:22.929274 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667\": container with ID starting with 78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667 not found: ID does not exist" containerID="78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667"
Nov 23 07:22:22 crc kubenswrapper[4988]: I1123 07:22:22.929308 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667"} err="failed to get container status \"78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667\": rpc error: code = NotFound desc = could not find container \"78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667\": container with ID starting with 78a7b5eeb6edb28ab5b10cde2ae5421ae8c33560bf762a9fabc6135d91c12667 not found: ID does not exist"
Nov 23 07:22:24 crc kubenswrapper[4988]: I1123 07:22:24.514674 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98325823-08a0-434c-be88-c18340835ce1" path="/var/lib/kubelet/pods/98325823-08a0-434c-be88-c18340835ce1/volumes"
Nov 23 07:22:51 crc kubenswrapper[4988]: I1123 07:22:51.672392 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:22:51 crc kubenswrapper[4988]: I1123 07:22:51.672912 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
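[Editor's note] With the three catalog pods gone, the rest of the section is dominated by this machine-config-daemon liveness probe failing: the kubelet GETs http://127.0.0.1:8798/health and treats the refused connection as a probe failure. As an illustration only (not the machine-config-daemon's actual implementation), an endpoint that satisfies such a probe merely has to accept the connection and answer 2xx:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Hypothetical stand-in for a liveness endpoint like the one probed above.
	// The kubelet's HTTP prober treats 2xx/3xx responses as success; a refused
	// connection (nothing listening on 127.0.0.1:8798) counts as a failure, and
	// enough consecutive failures get the container killed and restarted.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}
```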
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:22:51 crc kubenswrapper[4988]: I1123 07:22:51.672912 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:23:21 crc kubenswrapper[4988]: I1123 07:23:21.671660 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:23:21 crc kubenswrapper[4988]: I1123 07:23:21.673321 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:23:51 crc kubenswrapper[4988]: I1123 07:23:51.672859 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:23:51 crc kubenswrapper[4988]: I1123 07:23:51.673560 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:23:51 crc kubenswrapper[4988]: I1123 07:23:51.673628 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:23:51 crc kubenswrapper[4988]: I1123 07:23:51.674382 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:23:51 crc kubenswrapper[4988]: I1123 07:23:51.674479 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" gracePeriod=600 Nov 23 07:23:51 crc kubenswrapper[4988]: E1123 07:23:51.806723 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:23:52 crc kubenswrapper[4988]: I1123 07:23:52.702021 4988 
generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" exitCode=0 Nov 23 07:23:52 crc kubenswrapper[4988]: I1123 07:23:52.702103 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028"} Nov 23 07:23:52 crc kubenswrapper[4988]: I1123 07:23:52.702291 4988 scope.go:117] "RemoveContainer" containerID="304bb6774a2d8ba34b902ca894c59a77abb1e2679cfbfc985da958dba044bc10" Nov 23 07:23:52 crc kubenswrapper[4988]: I1123 07:23:52.703006 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:23:52 crc kubenswrapper[4988]: E1123 07:23:52.703503 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:24:04 crc kubenswrapper[4988]: I1123 07:24:04.496604 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:24:04 crc kubenswrapper[4988]: E1123 07:24:04.497631 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:24:16 crc kubenswrapper[4988]: I1123 07:24:16.496816 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:24:16 crc kubenswrapper[4988]: E1123 07:24:16.497857 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:24:28 crc kubenswrapper[4988]: I1123 07:24:28.503393 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:24:28 crc kubenswrapper[4988]: E1123 07:24:28.504593 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:24:40 crc kubenswrapper[4988]: I1123 07:24:40.496680 4988 scope.go:117] "RemoveContainer" 
containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:24:40 crc kubenswrapper[4988]: E1123 07:24:40.497683 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:24:51 crc kubenswrapper[4988]: I1123 07:24:51.496905 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:24:51 crc kubenswrapper[4988]: E1123 07:24:51.498080 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:25:05 crc kubenswrapper[4988]: I1123 07:25:05.495959 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:25:05 crc kubenswrapper[4988]: E1123 07:25:05.496895 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:25:17 crc kubenswrapper[4988]: I1123 07:25:17.495930 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:25:17 crc kubenswrapper[4988]: E1123 07:25:17.496805 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:25:28 crc kubenswrapper[4988]: I1123 07:25:28.505360 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:25:28 crc kubenswrapper[4988]: E1123 07:25:28.508266 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:25:42 crc kubenswrapper[4988]: I1123 07:25:42.495759 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:25:42 crc kubenswrapper[4988]: E1123 07:25:42.496616 4988 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:25:57 crc kubenswrapper[4988]: I1123 07:25:57.497588 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:25:57 crc kubenswrapper[4988]: E1123 07:25:57.499423 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:26:08 crc kubenswrapper[4988]: I1123 07:26:08.503740 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:26:08 crc kubenswrapper[4988]: E1123 07:26:08.505185 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:26:21 crc kubenswrapper[4988]: I1123 07:26:21.496438 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:26:21 crc kubenswrapper[4988]: E1123 07:26:21.497744 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:26:32 crc kubenswrapper[4988]: I1123 07:26:32.496961 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:26:32 crc kubenswrapper[4988]: E1123 07:26:32.498158 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:26:44 crc kubenswrapper[4988]: I1123 07:26:44.496765 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:26:44 crc kubenswrapper[4988]: E1123 07:26:44.497923 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:26:56 crc kubenswrapper[4988]: I1123 07:26:56.496570 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:26:56 crc kubenswrapper[4988]: E1123 07:26:56.497666 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:27:09 crc kubenswrapper[4988]: I1123 07:27:09.496419 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:27:09 crc kubenswrapper[4988]: E1123 07:27:09.497225 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:27:21 crc kubenswrapper[4988]: I1123 07:27:21.496516 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:27:21 crc kubenswrapper[4988]: E1123 07:27:21.498999 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:27:33 crc kubenswrapper[4988]: I1123 07:27:33.496707 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:27:33 crc kubenswrapper[4988]: E1123 07:27:33.498368 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:27:45 crc kubenswrapper[4988]: I1123 07:27:45.496185 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:27:45 crc kubenswrapper[4988]: E1123 07:27:45.497158 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" 
podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:27:57 crc kubenswrapper[4988]: I1123 07:27:57.495930 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:27:57 crc kubenswrapper[4988]: E1123 07:27:57.496830 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:28:12 crc kubenswrapper[4988]: I1123 07:28:12.496242 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:28:12 crc kubenswrapper[4988]: E1123 07:28:12.497133 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:28:27 crc kubenswrapper[4988]: I1123 07:28:27.497279 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:28:27 crc kubenswrapper[4988]: E1123 07:28:27.497744 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:28:38 crc kubenswrapper[4988]: I1123 07:28:38.504464 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:28:38 crc kubenswrapper[4988]: E1123 07:28:38.505710 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:28:50 crc kubenswrapper[4988]: I1123 07:28:50.496077 4988 scope.go:117] "RemoveContainer" containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:28:50 crc kubenswrapper[4988]: E1123 07:28:50.497254 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:29:04 crc kubenswrapper[4988]: I1123 07:29:04.496047 4988 scope.go:117] "RemoveContainer" 
containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:29:05 crc kubenswrapper[4988]: I1123 07:29:05.008123 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"b21cd6d3cf2eaac3dc33ac23608c5389665761d31ea85b0dbd282269f51fb71c"} Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.547919 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6vtpg"] Nov 23 07:29:17 crc kubenswrapper[4988]: E1123 07:29:17.549295 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98325823-08a0-434c-be88-c18340835ce1" containerName="extract-utilities" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.549324 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="98325823-08a0-434c-be88-c18340835ce1" containerName="extract-utilities" Nov 23 07:29:17 crc kubenswrapper[4988]: E1123 07:29:17.549360 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98325823-08a0-434c-be88-c18340835ce1" containerName="registry-server" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.549375 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="98325823-08a0-434c-be88-c18340835ce1" containerName="registry-server" Nov 23 07:29:17 crc kubenswrapper[4988]: E1123 07:29:17.549408 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98325823-08a0-434c-be88-c18340835ce1" containerName="extract-content" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.549421 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="98325823-08a0-434c-be88-c18340835ce1" containerName="extract-content" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.549728 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="98325823-08a0-434c-be88-c18340835ce1" containerName="registry-server" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.551272 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.563067 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vtpg"] Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.706593 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksrn5\" (UniqueName: \"kubernetes.io/projected/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-kube-api-access-ksrn5\") pod \"redhat-operators-6vtpg\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.707085 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-utilities\") pod \"redhat-operators-6vtpg\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.707551 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-catalog-content\") pod \"redhat-operators-6vtpg\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.808817 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-catalog-content\") pod \"redhat-operators-6vtpg\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.808898 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksrn5\" (UniqueName: \"kubernetes.io/projected/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-kube-api-access-ksrn5\") pod \"redhat-operators-6vtpg\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.808947 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-utilities\") pod \"redhat-operators-6vtpg\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.809561 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-utilities\") pod \"redhat-operators-6vtpg\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.809954 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-catalog-content\") pod \"redhat-operators-6vtpg\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.843601 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ksrn5\" (UniqueName: \"kubernetes.io/projected/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-kube-api-access-ksrn5\") pod \"redhat-operators-6vtpg\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:17 crc kubenswrapper[4988]: I1123 07:29:17.929595 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:18 crc kubenswrapper[4988]: I1123 07:29:18.365830 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6vtpg"] Nov 23 07:29:18 crc kubenswrapper[4988]: W1123 07:29:18.377344 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9675bd90_bbc5_47dd_a433_fb7ad41cc4a1.slice/crio-d626f631bb0577ecf5701cc60e9afcd1a6d1430279c42de59df3741f4407d75b WatchSource:0}: Error finding container d626f631bb0577ecf5701cc60e9afcd1a6d1430279c42de59df3741f4407d75b: Status 404 returned error can't find the container with id d626f631bb0577ecf5701cc60e9afcd1a6d1430279c42de59df3741f4407d75b Nov 23 07:29:19 crc kubenswrapper[4988]: I1123 07:29:19.136475 4988 generic.go:334] "Generic (PLEG): container finished" podID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerID="0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1" exitCode=0 Nov 23 07:29:19 crc kubenswrapper[4988]: I1123 07:29:19.136555 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vtpg" event={"ID":"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1","Type":"ContainerDied","Data":"0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1"} Nov 23 07:29:19 crc kubenswrapper[4988]: I1123 07:29:19.136599 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vtpg" event={"ID":"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1","Type":"ContainerStarted","Data":"d626f631bb0577ecf5701cc60e9afcd1a6d1430279c42de59df3741f4407d75b"} Nov 23 07:29:19 crc kubenswrapper[4988]: I1123 07:29:19.139738 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:29:20 crc kubenswrapper[4988]: I1123 07:29:20.149489 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vtpg" event={"ID":"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1","Type":"ContainerStarted","Data":"a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732"} Nov 23 07:29:21 crc kubenswrapper[4988]: I1123 07:29:21.161825 4988 generic.go:334] "Generic (PLEG): container finished" podID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerID="a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732" exitCode=0 Nov 23 07:29:21 crc kubenswrapper[4988]: I1123 07:29:21.161891 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vtpg" event={"ID":"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1","Type":"ContainerDied","Data":"a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732"} Nov 23 07:29:22 crc kubenswrapper[4988]: I1123 07:29:22.173879 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vtpg" event={"ID":"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1","Type":"ContainerStarted","Data":"6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e"} Nov 23 07:29:22 crc kubenswrapper[4988]: I1123 07:29:22.203842 4988 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-6vtpg" podStartSLOduration=2.788883296 podStartE2EDuration="5.203822155s" podCreationTimestamp="2025-11-23 07:29:17 +0000 UTC" firstStartedPulling="2025-11-23 07:29:19.139232295 +0000 UTC m=+2611.447745098" lastFinishedPulling="2025-11-23 07:29:21.554171154 +0000 UTC m=+2613.862683957" observedRunningTime="2025-11-23 07:29:22.200257169 +0000 UTC m=+2614.508769952" watchObservedRunningTime="2025-11-23 07:29:22.203822155 +0000 UTC m=+2614.512334938" Nov 23 07:29:27 crc kubenswrapper[4988]: I1123 07:29:27.930246 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:27 crc kubenswrapper[4988]: I1123 07:29:27.930907 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:28 crc kubenswrapper[4988]: I1123 07:29:28.973005 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6vtpg" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerName="registry-server" probeResult="failure" output=< Nov 23 07:29:28 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 07:29:28 crc kubenswrapper[4988]: > Nov 23 07:29:37 crc kubenswrapper[4988]: I1123 07:29:37.991229 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:38 crc kubenswrapper[4988]: I1123 07:29:38.069435 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:38 crc kubenswrapper[4988]: I1123 07:29:38.238643 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vtpg"] Nov 23 07:29:39 crc kubenswrapper[4988]: I1123 07:29:39.320133 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6vtpg" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerName="registry-server" containerID="cri-o://6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e" gracePeriod=2 Nov 23 07:29:39 crc kubenswrapper[4988]: I1123 07:29:39.833548 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:39 crc kubenswrapper[4988]: I1123 07:29:39.952694 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksrn5\" (UniqueName: \"kubernetes.io/projected/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-kube-api-access-ksrn5\") pod \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " Nov 23 07:29:39 crc kubenswrapper[4988]: I1123 07:29:39.952765 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-utilities\") pod \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " Nov 23 07:29:39 crc kubenswrapper[4988]: I1123 07:29:39.952847 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-catalog-content\") pod \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\" (UID: \"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1\") " Nov 23 07:29:39 crc kubenswrapper[4988]: I1123 07:29:39.954428 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-utilities" (OuterVolumeSpecName: "utilities") pod "9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" (UID: "9675bd90-bbc5-47dd-a433-fb7ad41cc4a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:29:39 crc kubenswrapper[4988]: I1123 07:29:39.964521 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-kube-api-access-ksrn5" (OuterVolumeSpecName: "kube-api-access-ksrn5") pod "9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" (UID: "9675bd90-bbc5-47dd-a433-fb7ad41cc4a1"). InnerVolumeSpecName "kube-api-access-ksrn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.054599 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksrn5\" (UniqueName: \"kubernetes.io/projected/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-kube-api-access-ksrn5\") on node \"crc\" DevicePath \"\"" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.054632 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.094591 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" (UID: "9675bd90-bbc5-47dd-a433-fb7ad41cc4a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.156129 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.327634 4988 generic.go:334] "Generic (PLEG): container finished" podID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerID="6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e" exitCode=0 Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.327680 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vtpg" event={"ID":"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1","Type":"ContainerDied","Data":"6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e"} Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.327718 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6vtpg" event={"ID":"9675bd90-bbc5-47dd-a433-fb7ad41cc4a1","Type":"ContainerDied","Data":"d626f631bb0577ecf5701cc60e9afcd1a6d1430279c42de59df3741f4407d75b"} Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.327718 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6vtpg" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.327739 4988 scope.go:117] "RemoveContainer" containerID="6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.350782 4988 scope.go:117] "RemoveContainer" containerID="a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.364415 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6vtpg"] Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.371280 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6vtpg"] Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.393682 4988 scope.go:117] "RemoveContainer" containerID="0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.412928 4988 scope.go:117] "RemoveContainer" containerID="6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e" Nov 23 07:29:40 crc kubenswrapper[4988]: E1123 07:29:40.413378 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e\": container with ID starting with 6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e not found: ID does not exist" containerID="6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e" Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.413419 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e"} err="failed to get container status \"6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e\": rpc error: code = NotFound desc = could not find container \"6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e\": container with ID starting with 6f3357c2f3dbaeab19ac7ec5a43a0396075640717a5f7f225e71485f69283e0e not found: ID does not exist" Nov 23 07:29:40 crc 
Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.413448 4988 scope.go:117] "RemoveContainer" containerID="a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732"
Nov 23 07:29:40 crc kubenswrapper[4988]: E1123 07:29:40.413754 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732\": container with ID starting with a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732 not found: ID does not exist" containerID="a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732"
Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.413784 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732"} err="failed to get container status \"a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732\": rpc error: code = NotFound desc = could not find container \"a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732\": container with ID starting with a6b3b2bcf407728c1587fc8ceffde67f03bdacd190201faf1cd04fffc2a60732 not found: ID does not exist"
Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.413802 4988 scope.go:117] "RemoveContainer" containerID="0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1"
Nov 23 07:29:40 crc kubenswrapper[4988]: E1123 07:29:40.414052 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1\": container with ID starting with 0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1 not found: ID does not exist" containerID="0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1"
Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.414074 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1"} err="failed to get container status \"0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1\": rpc error: code = NotFound desc = could not find container \"0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1\": container with ID starting with 0db93317ce0706cc95b388acdfdbb8ce3d5e4042233baec85454c7720c2511a1 not found: ID does not exist"
Nov 23 07:29:40 crc kubenswrapper[4988]: E1123 07:29:40.479362 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9675bd90_bbc5_47dd_a433_fb7ad41cc4a1.slice\": RecentStats: unable to find data in memory cache]"
Nov 23 07:29:40 crc kubenswrapper[4988]: I1123 07:29:40.505022 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" path="/var/lib/kubelet/pods/9675bd90-bbc5-47dd-a433-fb7ad41cc4a1/volumes"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.172873 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"]
Nov 23 07:30:00 crc kubenswrapper[4988]: E1123 07:30:00.174356 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerName="registry-server"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.174393 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerName="registry-server"
Nov 23 07:30:00 crc kubenswrapper[4988]: E1123 07:30:00.174454 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerName="extract-content"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.174473 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerName="extract-content"
Nov 23 07:30:00 crc kubenswrapper[4988]: E1123 07:30:00.174526 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerName="extract-utilities"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.174544 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerName="extract-utilities"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.174898 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9675bd90-bbc5-47dd-a433-fb7ad41cc4a1" containerName="registry-server"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.176017 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.179609 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.182678 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.188029 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"]
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.279472 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a307d56-a956-4aae-84ad-49f0559c6252-secret-volume\") pod \"collect-profiles-29398050-g6d46\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.279532 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a307d56-a956-4aae-84ad-49f0559c6252-config-volume\") pod \"collect-profiles-29398050-g6d46\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.279626 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtrdf\" (UniqueName: \"kubernetes.io/projected/4a307d56-a956-4aae-84ad-49f0559c6252-kube-api-access-wtrdf\") pod \"collect-profiles-29398050-g6d46\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.380694 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a307d56-a956-4aae-84ad-49f0559c6252-secret-volume\") pod \"collect-profiles-29398050-g6d46\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.380741 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a307d56-a956-4aae-84ad-49f0559c6252-config-volume\") pod \"collect-profiles-29398050-g6d46\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.380802 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtrdf\" (UniqueName: \"kubernetes.io/projected/4a307d56-a956-4aae-84ad-49f0559c6252-kube-api-access-wtrdf\") pod \"collect-profiles-29398050-g6d46\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.381679 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a307d56-a956-4aae-84ad-49f0559c6252-config-volume\") pod \"collect-profiles-29398050-g6d46\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.386903 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a307d56-a956-4aae-84ad-49f0559c6252-secret-volume\") pod \"collect-profiles-29398050-g6d46\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.399280 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtrdf\" (UniqueName: \"kubernetes.io/projected/4a307d56-a956-4aae-84ad-49f0559c6252-kube-api-access-wtrdf\") pod \"collect-profiles-29398050-g6d46\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.508117 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46" Nov 23 07:30:00 crc kubenswrapper[4988]: I1123 07:30:00.965732 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"] Nov 23 07:30:01 crc kubenswrapper[4988]: I1123 07:30:01.516669 4988 generic.go:334] "Generic (PLEG): container finished" podID="4a307d56-a956-4aae-84ad-49f0559c6252" containerID="dee459d551881e51682810fe037fa610da2348033e95a2aa4b0379c69616100a" exitCode=0 Nov 23 07:30:01 crc kubenswrapper[4988]: I1123 07:30:01.516754 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46" event={"ID":"4a307d56-a956-4aae-84ad-49f0559c6252","Type":"ContainerDied","Data":"dee459d551881e51682810fe037fa610da2348033e95a2aa4b0379c69616100a"} Nov 23 07:30:01 crc kubenswrapper[4988]: I1123 07:30:01.517042 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46" event={"ID":"4a307d56-a956-4aae-84ad-49f0559c6252","Type":"ContainerStarted","Data":"26ffd62f4601e9126eea2bbe0a469a9bdaf9dacfd71b2f4354a309eda75042c9"} Nov 23 07:30:02 crc kubenswrapper[4988]: I1123 07:30:02.970994 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46" Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.140395 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtrdf\" (UniqueName: \"kubernetes.io/projected/4a307d56-a956-4aae-84ad-49f0559c6252-kube-api-access-wtrdf\") pod \"4a307d56-a956-4aae-84ad-49f0559c6252\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.140677 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a307d56-a956-4aae-84ad-49f0559c6252-config-volume\") pod \"4a307d56-a956-4aae-84ad-49f0559c6252\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.140797 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a307d56-a956-4aae-84ad-49f0559c6252-secret-volume\") pod \"4a307d56-a956-4aae-84ad-49f0559c6252\" (UID: \"4a307d56-a956-4aae-84ad-49f0559c6252\") " Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.141866 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a307d56-a956-4aae-84ad-49f0559c6252-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a307d56-a956-4aae-84ad-49f0559c6252" (UID: "4a307d56-a956-4aae-84ad-49f0559c6252"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.149371 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a307d56-a956-4aae-84ad-49f0559c6252-kube-api-access-wtrdf" (OuterVolumeSpecName: "kube-api-access-wtrdf") pod "4a307d56-a956-4aae-84ad-49f0559c6252" (UID: "4a307d56-a956-4aae-84ad-49f0559c6252"). InnerVolumeSpecName "kube-api-access-wtrdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.150813 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a307d56-a956-4aae-84ad-49f0559c6252-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a307d56-a956-4aae-84ad-49f0559c6252" (UID: "4a307d56-a956-4aae-84ad-49f0559c6252"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.242479 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtrdf\" (UniqueName: \"kubernetes.io/projected/4a307d56-a956-4aae-84ad-49f0559c6252-kube-api-access-wtrdf\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.242517 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a307d56-a956-4aae-84ad-49f0559c6252-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.242532 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a307d56-a956-4aae-84ad-49f0559c6252-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.537178 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46" event={"ID":"4a307d56-a956-4aae-84ad-49f0559c6252","Type":"ContainerDied","Data":"26ffd62f4601e9126eea2bbe0a469a9bdaf9dacfd71b2f4354a309eda75042c9"} Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.537305 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26ffd62f4601e9126eea2bbe0a469a9bdaf9dacfd71b2f4354a309eda75042c9" Nov 23 07:30:03 crc kubenswrapper[4988]: I1123 07:30:03.537305 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46" Nov 23 07:30:04 crc kubenswrapper[4988]: I1123 07:30:04.061402 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp"] Nov 23 07:30:04 crc kubenswrapper[4988]: I1123 07:30:04.065642 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-2rwsp"] Nov 23 07:30:04 crc kubenswrapper[4988]: I1123 07:30:04.506047 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037989f7-21fc-4899-b869-b0ebfbe70cd6" path="/var/lib/kubelet/pods/037989f7-21fc-4899-b869-b0ebfbe70cd6/volumes" Nov 23 07:30:55 crc kubenswrapper[4988]: I1123 07:30:55.879808 4988 scope.go:117] "RemoveContainer" containerID="e88b8a74b1b5b0266ed221663b5d12f72bb9d0e2b403878d782ee60acfdf23dd" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.411927 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpcnb"] Nov 23 07:31:03 crc kubenswrapper[4988]: E1123 07:31:03.413717 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a307d56-a956-4aae-84ad-49f0559c6252" containerName="collect-profiles" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.413751 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a307d56-a956-4aae-84ad-49f0559c6252" containerName="collect-profiles" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.414110 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a307d56-a956-4aae-84ad-49f0559c6252" containerName="collect-profiles" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.416935 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.432446 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpcnb"] Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.479620 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-catalog-content\") pod \"redhat-marketplace-vpcnb\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.479667 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-utilities\") pod \"redhat-marketplace-vpcnb\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.479691 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cr7g\" (UniqueName: \"kubernetes.io/projected/0346ec45-5463-4301-8663-a4451adbe206-kube-api-access-8cr7g\") pod \"redhat-marketplace-vpcnb\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.582012 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-catalog-content\") pod \"redhat-marketplace-vpcnb\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.581138 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-catalog-content\") pod \"redhat-marketplace-vpcnb\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.582168 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-utilities\") pod \"redhat-marketplace-vpcnb\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.582243 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cr7g\" (UniqueName: \"kubernetes.io/projected/0346ec45-5463-4301-8663-a4451adbe206-kube-api-access-8cr7g\") pod \"redhat-marketplace-vpcnb\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.582622 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-utilities\") pod \"redhat-marketplace-vpcnb\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.600420 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8cr7g\" (UniqueName: \"kubernetes.io/projected/0346ec45-5463-4301-8663-a4451adbe206-kube-api-access-8cr7g\") pod \"redhat-marketplace-vpcnb\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:03 crc kubenswrapper[4988]: I1123 07:31:03.769033 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:04 crc kubenswrapper[4988]: I1123 07:31:04.292309 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpcnb"] Nov 23 07:31:05 crc kubenswrapper[4988]: I1123 07:31:05.158821 4988 generic.go:334] "Generic (PLEG): container finished" podID="0346ec45-5463-4301-8663-a4451adbe206" containerID="1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8" exitCode=0 Nov 23 07:31:05 crc kubenswrapper[4988]: I1123 07:31:05.158916 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpcnb" event={"ID":"0346ec45-5463-4301-8663-a4451adbe206","Type":"ContainerDied","Data":"1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8"} Nov 23 07:31:05 crc kubenswrapper[4988]: I1123 07:31:05.159254 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpcnb" event={"ID":"0346ec45-5463-4301-8663-a4451adbe206","Type":"ContainerStarted","Data":"ca40c9151c81067a308a17c073b1a4f9a85a862eeb4625669e7b96894d816ecd"} Nov 23 07:31:06 crc kubenswrapper[4988]: I1123 07:31:06.169080 4988 generic.go:334] "Generic (PLEG): container finished" podID="0346ec45-5463-4301-8663-a4451adbe206" containerID="2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5" exitCode=0 Nov 23 07:31:06 crc kubenswrapper[4988]: I1123 07:31:06.169214 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpcnb" event={"ID":"0346ec45-5463-4301-8663-a4451adbe206","Type":"ContainerDied","Data":"2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5"} Nov 23 07:31:07 crc kubenswrapper[4988]: I1123 07:31:07.177244 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpcnb" event={"ID":"0346ec45-5463-4301-8663-a4451adbe206","Type":"ContainerStarted","Data":"ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8"} Nov 23 07:31:07 crc kubenswrapper[4988]: I1123 07:31:07.206044 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpcnb" podStartSLOduration=2.803945199 podStartE2EDuration="4.206025171s" podCreationTimestamp="2025-11-23 07:31:03 +0000 UTC" firstStartedPulling="2025-11-23 07:31:05.162082128 +0000 UTC m=+2717.470594911" lastFinishedPulling="2025-11-23 07:31:06.56416211 +0000 UTC m=+2718.872674883" observedRunningTime="2025-11-23 07:31:07.20269654 +0000 UTC m=+2719.511209303" watchObservedRunningTime="2025-11-23 07:31:07.206025171 +0000 UTC m=+2719.514537934" Nov 23 07:31:13 crc kubenswrapper[4988]: I1123 07:31:13.770085 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:13 crc kubenswrapper[4988]: I1123 07:31:13.770848 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:13 crc kubenswrapper[4988]: I1123 07:31:13.847560 4988 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:14 crc kubenswrapper[4988]: I1123 07:31:14.306586 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:18 crc kubenswrapper[4988]: I1123 07:31:18.688285 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpcnb"] Nov 23 07:31:18 crc kubenswrapper[4988]: I1123 07:31:18.688523 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vpcnb" podUID="0346ec45-5463-4301-8663-a4451adbe206" containerName="registry-server" containerID="cri-o://ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8" gracePeriod=2 Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.177973 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.281579 4988 generic.go:334] "Generic (PLEG): container finished" podID="0346ec45-5463-4301-8663-a4451adbe206" containerID="ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8" exitCode=0 Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.281638 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpcnb" event={"ID":"0346ec45-5463-4301-8663-a4451adbe206","Type":"ContainerDied","Data":"ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8"} Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.281679 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpcnb" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.281700 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpcnb" event={"ID":"0346ec45-5463-4301-8663-a4451adbe206","Type":"ContainerDied","Data":"ca40c9151c81067a308a17c073b1a4f9a85a862eeb4625669e7b96894d816ecd"} Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.281724 4988 scope.go:117] "RemoveContainer" containerID="ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.311572 4988 scope.go:117] "RemoveContainer" containerID="2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.333046 4988 scope.go:117] "RemoveContainer" containerID="1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.333429 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cr7g\" (UniqueName: \"kubernetes.io/projected/0346ec45-5463-4301-8663-a4451adbe206-kube-api-access-8cr7g\") pod \"0346ec45-5463-4301-8663-a4451adbe206\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.333519 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-utilities\") pod \"0346ec45-5463-4301-8663-a4451adbe206\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.333627 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-catalog-content\") pod \"0346ec45-5463-4301-8663-a4451adbe206\" (UID: \"0346ec45-5463-4301-8663-a4451adbe206\") " Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.336112 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-utilities" (OuterVolumeSpecName: "utilities") pod "0346ec45-5463-4301-8663-a4451adbe206" (UID: "0346ec45-5463-4301-8663-a4451adbe206"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.339754 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0346ec45-5463-4301-8663-a4451adbe206-kube-api-access-8cr7g" (OuterVolumeSpecName: "kube-api-access-8cr7g") pod "0346ec45-5463-4301-8663-a4451adbe206" (UID: "0346ec45-5463-4301-8663-a4451adbe206"). InnerVolumeSpecName "kube-api-access-8cr7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.362329 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0346ec45-5463-4301-8663-a4451adbe206" (UID: "0346ec45-5463-4301-8663-a4451adbe206"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.395319 4988 scope.go:117] "RemoveContainer" containerID="ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8" Nov 23 07:31:19 crc kubenswrapper[4988]: E1123 07:31:19.395800 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8\": container with ID starting with ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8 not found: ID does not exist" containerID="ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.395844 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8"} err="failed to get container status \"ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8\": rpc error: code = NotFound desc = could not find container \"ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8\": container with ID starting with ed0e54d548c866d0ad21fb9213399f7c5b97ad1874e8f7c72d46200d5ea807e8 not found: ID does not exist" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.395869 4988 scope.go:117] "RemoveContainer" containerID="2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5" Nov 23 07:31:19 crc kubenswrapper[4988]: E1123 07:31:19.396392 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5\": container with ID starting with 2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5 not found: ID does not exist" containerID="2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.396420 4988 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5"} err="failed to get container status \"2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5\": rpc error: code = NotFound desc = could not find container \"2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5\": container with ID starting with 2dd745db7de649de0b654c85e41397544ecdcab5f39f696c75c446e1623cddf5 not found: ID does not exist" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.396435 4988 scope.go:117] "RemoveContainer" containerID="1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8" Nov 23 07:31:19 crc kubenswrapper[4988]: E1123 07:31:19.396747 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8\": container with ID starting with 1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8 not found: ID does not exist" containerID="1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.396773 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8"} err="failed to get container status \"1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8\": rpc error: code = NotFound desc = could not find container \"1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8\": container with ID starting with 1e89a5e035ede2ee452e158b42efd7a2482502a47b9d284d082f0ce37d14c4c8 not found: ID does not exist" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.436843 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cr7g\" (UniqueName: \"kubernetes.io/projected/0346ec45-5463-4301-8663-a4451adbe206-kube-api-access-8cr7g\") on node \"crc\" DevicePath \"\"" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.436866 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.436875 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0346ec45-5463-4301-8663-a4451adbe206-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.647287 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpcnb"] Nov 23 07:31:19 crc kubenswrapper[4988]: I1123 07:31:19.652825 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpcnb"] Nov 23 07:31:20 crc kubenswrapper[4988]: I1123 07:31:20.504899 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0346ec45-5463-4301-8663-a4451adbe206" path="/var/lib/kubelet/pods/0346ec45-5463-4301-8663-a4451adbe206/volumes" Nov 23 07:31:21 crc kubenswrapper[4988]: I1123 07:31:21.672546 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:31:21 crc kubenswrapper[4988]: I1123 07:31:21.673020 4988 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:31:51 crc kubenswrapper[4988]: I1123 07:31:51.672084 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:31:51 crc kubenswrapper[4988]: I1123 07:31:51.673490 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:32:21 crc kubenswrapper[4988]: I1123 07:32:21.672642 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:32:21 crc kubenswrapper[4988]: I1123 07:32:21.674127 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:32:21 crc kubenswrapper[4988]: I1123 07:32:21.674243 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:32:21 crc kubenswrapper[4988]: I1123 07:32:21.674990 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b21cd6d3cf2eaac3dc33ac23608c5389665761d31ea85b0dbd282269f51fb71c"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:32:21 crc kubenswrapper[4988]: I1123 07:32:21.675060 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://b21cd6d3cf2eaac3dc33ac23608c5389665761d31ea85b0dbd282269f51fb71c" gracePeriod=600 Nov 23 07:32:21 crc kubenswrapper[4988]: I1123 07:32:21.880678 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="b21cd6d3cf2eaac3dc33ac23608c5389665761d31ea85b0dbd282269f51fb71c" exitCode=0 Nov 23 07:32:21 crc kubenswrapper[4988]: I1123 07:32:21.880939 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"b21cd6d3cf2eaac3dc33ac23608c5389665761d31ea85b0dbd282269f51fb71c"} Nov 23 07:32:21 crc kubenswrapper[4988]: I1123 07:32:21.881031 4988 scope.go:117] "RemoveContainer" 
containerID="d256ad96fbbdc55a7b72139123be90907bbab9b1b579598a295deb31a4a59028" Nov 23 07:32:22 crc kubenswrapper[4988]: I1123 07:32:22.893257 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"} Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.478885 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6fj9m"] Nov 23 07:32:24 crc kubenswrapper[4988]: E1123 07:32:24.479826 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0346ec45-5463-4301-8663-a4451adbe206" containerName="registry-server" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.479885 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0346ec45-5463-4301-8663-a4451adbe206" containerName="registry-server" Nov 23 07:32:24 crc kubenswrapper[4988]: E1123 07:32:24.479906 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0346ec45-5463-4301-8663-a4451adbe206" containerName="extract-utilities" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.479914 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0346ec45-5463-4301-8663-a4451adbe206" containerName="extract-utilities" Nov 23 07:32:24 crc kubenswrapper[4988]: E1123 07:32:24.479939 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0346ec45-5463-4301-8663-a4451adbe206" containerName="extract-content" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.479946 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0346ec45-5463-4301-8663-a4451adbe206" containerName="extract-content" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.480140 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0346ec45-5463-4301-8663-a4451adbe206" containerName="registry-server" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.481903 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.494106 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fj9m"] Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.650583 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-utilities\") pod \"certified-operators-6fj9m\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") " pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.650951 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7ckc\" (UniqueName: \"kubernetes.io/projected/99767206-c68e-4e44-97ed-c6b406b5adb5-kube-api-access-b7ckc\") pod \"certified-operators-6fj9m\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") " pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.650990 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-catalog-content\") pod \"certified-operators-6fj9m\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") " pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.752384 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-utilities\") pod \"certified-operators-6fj9m\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") " pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.752434 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7ckc\" (UniqueName: \"kubernetes.io/projected/99767206-c68e-4e44-97ed-c6b406b5adb5-kube-api-access-b7ckc\") pod \"certified-operators-6fj9m\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") " pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.752491 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-catalog-content\") pod \"certified-operators-6fj9m\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") " pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.753059 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-catalog-content\") pod \"certified-operators-6fj9m\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") " pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.753487 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-utilities\") pod \"certified-operators-6fj9m\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") " pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.789620 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b7ckc\" (UniqueName: \"kubernetes.io/projected/99767206-c68e-4e44-97ed-c6b406b5adb5-kube-api-access-b7ckc\") pod \"certified-operators-6fj9m\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") " pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:24 crc kubenswrapper[4988]: I1123 07:32:24.821289 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:25 crc kubenswrapper[4988]: I1123 07:32:25.318858 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fj9m"] Nov 23 07:32:25 crc kubenswrapper[4988]: W1123 07:32:25.327152 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99767206_c68e_4e44_97ed_c6b406b5adb5.slice/crio-281f3b887c16918eca21c742e49f8fa97a14659de51e453022a61e6c38dfba14 WatchSource:0}: Error finding container 281f3b887c16918eca21c742e49f8fa97a14659de51e453022a61e6c38dfba14: Status 404 returned error can't find the container with id 281f3b887c16918eca21c742e49f8fa97a14659de51e453022a61e6c38dfba14 Nov 23 07:32:25 crc kubenswrapper[4988]: I1123 07:32:25.926855 4988 generic.go:334] "Generic (PLEG): container finished" podID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerID="07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e" exitCode=0 Nov 23 07:32:25 crc kubenswrapper[4988]: I1123 07:32:25.926942 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fj9m" event={"ID":"99767206-c68e-4e44-97ed-c6b406b5adb5","Type":"ContainerDied","Data":"07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e"} Nov 23 07:32:25 crc kubenswrapper[4988]: I1123 07:32:25.926988 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fj9m" event={"ID":"99767206-c68e-4e44-97ed-c6b406b5adb5","Type":"ContainerStarted","Data":"281f3b887c16918eca21c742e49f8fa97a14659de51e453022a61e6c38dfba14"} Nov 23 07:32:26 crc kubenswrapper[4988]: I1123 07:32:26.945485 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fj9m" event={"ID":"99767206-c68e-4e44-97ed-c6b406b5adb5","Type":"ContainerStarted","Data":"e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045"} Nov 23 07:32:27 crc kubenswrapper[4988]: I1123 07:32:27.959467 4988 generic.go:334] "Generic (PLEG): container finished" podID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerID="e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045" exitCode=0 Nov 23 07:32:27 crc kubenswrapper[4988]: I1123 07:32:27.959527 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fj9m" event={"ID":"99767206-c68e-4e44-97ed-c6b406b5adb5","Type":"ContainerDied","Data":"e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045"} Nov 23 07:32:28 crc kubenswrapper[4988]: I1123 07:32:28.974437 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fj9m" event={"ID":"99767206-c68e-4e44-97ed-c6b406b5adb5","Type":"ContainerStarted","Data":"a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026"} Nov 23 07:32:29 crc kubenswrapper[4988]: I1123 07:32:29.010667 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6fj9m" 
Nov 23 07:32:34 crc kubenswrapper[4988]: I1123 07:32:34.821838 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6fj9m"
Nov 23 07:32:34 crc kubenswrapper[4988]: I1123 07:32:34.822701 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6fj9m"
Nov 23 07:32:34 crc kubenswrapper[4988]: I1123 07:32:34.880721 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6fj9m"
Nov 23 07:32:35 crc kubenswrapper[4988]: I1123 07:32:35.102774 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6fj9m"
Nov 23 07:32:35 crc kubenswrapper[4988]: I1123 07:32:35.164660 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fj9m"]
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.049444 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6fj9m" podUID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerName="registry-server" containerID="cri-o://a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026" gracePeriod=2
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.481362 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fj9m"
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.649187 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-utilities\") pod \"99767206-c68e-4e44-97ed-c6b406b5adb5\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") "
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.649303 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7ckc\" (UniqueName: \"kubernetes.io/projected/99767206-c68e-4e44-97ed-c6b406b5adb5-kube-api-access-b7ckc\") pod \"99767206-c68e-4e44-97ed-c6b406b5adb5\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") "
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.649385 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-catalog-content\") pod \"99767206-c68e-4e44-97ed-c6b406b5adb5\" (UID: \"99767206-c68e-4e44-97ed-c6b406b5adb5\") "
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.651673 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-utilities" (OuterVolumeSpecName: "utilities") pod "99767206-c68e-4e44-97ed-c6b406b5adb5" (UID: "99767206-c68e-4e44-97ed-c6b406b5adb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.659470 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99767206-c68e-4e44-97ed-c6b406b5adb5-kube-api-access-b7ckc" (OuterVolumeSpecName: "kube-api-access-b7ckc") pod "99767206-c68e-4e44-97ed-c6b406b5adb5" (UID: "99767206-c68e-4e44-97ed-c6b406b5adb5"). InnerVolumeSpecName "kube-api-access-b7ckc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.751031 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7ckc\" (UniqueName: \"kubernetes.io/projected/99767206-c68e-4e44-97ed-c6b406b5adb5-kube-api-access-b7ckc\") on node \"crc\" DevicePath \"\""
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.751094 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.950446 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99767206-c68e-4e44-97ed-c6b406b5adb5" (UID: "99767206-c68e-4e44-97ed-c6b406b5adb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:32:37 crc kubenswrapper[4988]: I1123 07:32:37.953693 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99767206-c68e-4e44-97ed-c6b406b5adb5-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.062093 4988 generic.go:334] "Generic (PLEG): container finished" podID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerID="a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026" exitCode=0
Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.062142 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fj9m" event={"ID":"99767206-c68e-4e44-97ed-c6b406b5adb5","Type":"ContainerDied","Data":"a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026"}
Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.062172 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fj9m" event={"ID":"99767206-c68e-4e44-97ed-c6b406b5adb5","Type":"ContainerDied","Data":"281f3b887c16918eca21c742e49f8fa97a14659de51e453022a61e6c38dfba14"}
Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.062237 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fj9m"
Need to start a new one" pod="openshift-marketplace/certified-operators-6fj9m" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.062235 4988 scope.go:117] "RemoveContainer" containerID="a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.097443 4988 scope.go:117] "RemoveContainer" containerID="e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.126379 4988 scope.go:117] "RemoveContainer" containerID="07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.176973 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fj9m"] Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.180842 4988 scope.go:117] "RemoveContainer" containerID="a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026" Nov 23 07:32:38 crc kubenswrapper[4988]: E1123 07:32:38.181387 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026\": container with ID starting with a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026 not found: ID does not exist" containerID="a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.181427 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026"} err="failed to get container status \"a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026\": rpc error: code = NotFound desc = could not find container \"a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026\": container with ID starting with a35abb38b0372380b40153e6aa5019fe9e8ca312e7a96f80b583ad8f0ef3e026 not found: ID does not exist" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.181521 4988 scope.go:117] "RemoveContainer" containerID="e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045" Nov 23 07:32:38 crc kubenswrapper[4988]: E1123 07:32:38.182169 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045\": container with ID starting with e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045 not found: ID does not exist" containerID="e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.182215 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045"} err="failed to get container status \"e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045\": rpc error: code = NotFound desc = could not find container \"e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045\": container with ID starting with e8c604660462b7815ff748407138f78532add9e59e653eeaadbeeec4fd684045 not found: ID does not exist" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.182235 4988 scope.go:117] "RemoveContainer" containerID="07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e" Nov 23 07:32:38 crc kubenswrapper[4988]: E1123 07:32:38.182978 4988 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e\": container with ID starting with 07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e not found: ID does not exist" containerID="07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.183079 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e"} err="failed to get container status \"07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e\": rpc error: code = NotFound desc = could not find container \"07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e\": container with ID starting with 07eee997741cdbfa7918bb94b0fafbb1199ffa62d9b1f8dea8f9ebf90f8b6c1e not found: ID does not exist" Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.185342 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6fj9m"] Nov 23 07:32:38 crc kubenswrapper[4988]: I1123 07:32:38.515962 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99767206-c68e-4e44-97ed-c6b406b5adb5" path="/var/lib/kubelet/pods/99767206-c68e-4e44-97ed-c6b406b5adb5/volumes" Nov 23 07:34:14 crc kubenswrapper[4988]: I1123 07:34:14.986265 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7xr2d"] Nov 23 07:34:14 crc kubenswrapper[4988]: E1123 07:34:14.987700 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerName="extract-content" Nov 23 07:34:14 crc kubenswrapper[4988]: I1123 07:34:14.987730 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerName="extract-content" Nov 23 07:34:14 crc kubenswrapper[4988]: E1123 07:34:14.987782 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerName="extract-utilities" Nov 23 07:34:14 crc kubenswrapper[4988]: I1123 07:34:14.987796 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerName="extract-utilities" Nov 23 07:34:14 crc kubenswrapper[4988]: E1123 07:34:14.987815 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerName="registry-server" Nov 23 07:34:14 crc kubenswrapper[4988]: I1123 07:34:14.987827 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerName="registry-server" Nov 23 07:34:14 crc kubenswrapper[4988]: I1123 07:34:14.988078 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="99767206-c68e-4e44-97ed-c6b406b5adb5" containerName="registry-server" Nov 23 07:34:14 crc kubenswrapper[4988]: I1123 07:34:14.989864 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.009766 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7xr2d"] Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.043306 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a82cd923-5d18-4b12-9a2b-bd52e81b68c4-utilities\") pod \"community-operators-7xr2d\" (UID: \"a82cd923-5d18-4b12-9a2b-bd52e81b68c4\") " pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.043704 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a82cd923-5d18-4b12-9a2b-bd52e81b68c4-catalog-content\") pod \"community-operators-7xr2d\" (UID: \"a82cd923-5d18-4b12-9a2b-bd52e81b68c4\") " pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.043955 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcj5l\" (UniqueName: \"kubernetes.io/projected/a82cd923-5d18-4b12-9a2b-bd52e81b68c4-kube-api-access-vcj5l\") pod \"community-operators-7xr2d\" (UID: \"a82cd923-5d18-4b12-9a2b-bd52e81b68c4\") " pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.146060 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a82cd923-5d18-4b12-9a2b-bd52e81b68c4-utilities\") pod \"community-operators-7xr2d\" (UID: \"a82cd923-5d18-4b12-9a2b-bd52e81b68c4\") " pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.146481 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a82cd923-5d18-4b12-9a2b-bd52e81b68c4-catalog-content\") pod \"community-operators-7xr2d\" (UID: \"a82cd923-5d18-4b12-9a2b-bd52e81b68c4\") " pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.146715 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcj5l\" (UniqueName: \"kubernetes.io/projected/a82cd923-5d18-4b12-9a2b-bd52e81b68c4-kube-api-access-vcj5l\") pod \"community-operators-7xr2d\" (UID: \"a82cd923-5d18-4b12-9a2b-bd52e81b68c4\") " pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.146791 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a82cd923-5d18-4b12-9a2b-bd52e81b68c4-utilities\") pod \"community-operators-7xr2d\" (UID: \"a82cd923-5d18-4b12-9a2b-bd52e81b68c4\") " pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.147168 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a82cd923-5d18-4b12-9a2b-bd52e81b68c4-catalog-content\") pod \"community-operators-7xr2d\" (UID: \"a82cd923-5d18-4b12-9a2b-bd52e81b68c4\") " pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.165962 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vcj5l\" (UniqueName: \"kubernetes.io/projected/a82cd923-5d18-4b12-9a2b-bd52e81b68c4-kube-api-access-vcj5l\") pod \"community-operators-7xr2d\" (UID: \"a82cd923-5d18-4b12-9a2b-bd52e81b68c4\") " pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.320512 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:15 crc kubenswrapper[4988]: I1123 07:34:15.862999 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7xr2d"] Nov 23 07:34:16 crc kubenswrapper[4988]: I1123 07:34:16.697088 4988 generic.go:334] "Generic (PLEG): container finished" podID="a82cd923-5d18-4b12-9a2b-bd52e81b68c4" containerID="3507973f208bf209dcab64ca77fe7ddd3f8a21b627af9e4613807a084fc46866" exitCode=0 Nov 23 07:34:16 crc kubenswrapper[4988]: I1123 07:34:16.697144 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xr2d" event={"ID":"a82cd923-5d18-4b12-9a2b-bd52e81b68c4","Type":"ContainerDied","Data":"3507973f208bf209dcab64ca77fe7ddd3f8a21b627af9e4613807a084fc46866"} Nov 23 07:34:16 crc kubenswrapper[4988]: I1123 07:34:16.697169 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xr2d" event={"ID":"a82cd923-5d18-4b12-9a2b-bd52e81b68c4","Type":"ContainerStarted","Data":"45afa366031c6d36951681030956380534a0cee5425234b40fde6148ecc31f48"} Nov 23 07:34:20 crc kubenswrapper[4988]: I1123 07:34:20.731123 4988 generic.go:334] "Generic (PLEG): container finished" podID="a82cd923-5d18-4b12-9a2b-bd52e81b68c4" containerID="f4eac87343917fe0539c9005027614b2cf5900e57edab8c76e9b0f7210f9636d" exitCode=0 Nov 23 07:34:20 crc kubenswrapper[4988]: I1123 07:34:20.731258 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xr2d" event={"ID":"a82cd923-5d18-4b12-9a2b-bd52e81b68c4","Type":"ContainerDied","Data":"f4eac87343917fe0539c9005027614b2cf5900e57edab8c76e9b0f7210f9636d"} Nov 23 07:34:20 crc kubenswrapper[4988]: I1123 07:34:20.733990 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:34:21 crc kubenswrapper[4988]: I1123 07:34:21.747700 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7xr2d" event={"ID":"a82cd923-5d18-4b12-9a2b-bd52e81b68c4","Type":"ContainerStarted","Data":"09ab27f2bbde6fdcc2ccf924080fb1cc1f8dedb4adc55f800e0c0d7a380fdc4a"} Nov 23 07:34:21 crc kubenswrapper[4988]: I1123 07:34:21.779567 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7xr2d" podStartSLOduration=3.298063184 podStartE2EDuration="7.77954659s" podCreationTimestamp="2025-11-23 07:34:14 +0000 UTC" firstStartedPulling="2025-11-23 07:34:16.699824643 +0000 UTC m=+2909.008337406" lastFinishedPulling="2025-11-23 07:34:21.181308009 +0000 UTC m=+2913.489820812" observedRunningTime="2025-11-23 07:34:21.778554915 +0000 UTC m=+2914.087067718" watchObservedRunningTime="2025-11-23 07:34:21.77954659 +0000 UTC m=+2914.088059363" Nov 23 07:34:25 crc kubenswrapper[4988]: I1123 07:34:25.320784 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:25 crc kubenswrapper[4988]: I1123 07:34:25.320849 4988 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:25 crc kubenswrapper[4988]: I1123 07:34:25.394593 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:35 crc kubenswrapper[4988]: I1123 07:34:35.399810 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7xr2d" Nov 23 07:34:35 crc kubenswrapper[4988]: I1123 07:34:35.485928 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7xr2d"] Nov 23 07:34:35 crc kubenswrapper[4988]: I1123 07:34:35.530371 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jb5m"] Nov 23 07:34:35 crc kubenswrapper[4988]: I1123 07:34:35.530629 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7jb5m" podUID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerName="registry-server" containerID="cri-o://1b276497d5c675685f70e2bb62342cba771c77e0ae43c381262f1bde98310661" gracePeriod=2 Nov 23 07:34:35 crc kubenswrapper[4988]: I1123 07:34:35.879956 4988 generic.go:334] "Generic (PLEG): container finished" podID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerID="1b276497d5c675685f70e2bb62342cba771c77e0ae43c381262f1bde98310661" exitCode=0 Nov 23 07:34:35 crc kubenswrapper[4988]: I1123 07:34:35.880057 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb5m" event={"ID":"e8d54713-f5b1-4f71-a8a1-8b604068e791","Type":"ContainerDied","Data":"1b276497d5c675685f70e2bb62342cba771c77e0ae43c381262f1bde98310661"} Nov 23 07:34:35 crc kubenswrapper[4988]: I1123 07:34:35.880537 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jb5m" event={"ID":"e8d54713-f5b1-4f71-a8a1-8b604068e791","Type":"ContainerDied","Data":"fbc55164b9de17f1f951967353678e4fa2e00be2274ef43829bc7add273e6151"} Nov 23 07:34:35 crc kubenswrapper[4988]: I1123 07:34:35.880567 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbc55164b9de17f1f951967353678e4fa2e00be2274ef43829bc7add273e6151" Nov 23 07:34:35 crc kubenswrapper[4988]: I1123 07:34:35.917244 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7jb5m" Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.036618 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-catalog-content\") pod \"e8d54713-f5b1-4f71-a8a1-8b604068e791\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.036687 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-utilities\") pod \"e8d54713-f5b1-4f71-a8a1-8b604068e791\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.036727 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2w9z\" (UniqueName: \"kubernetes.io/projected/e8d54713-f5b1-4f71-a8a1-8b604068e791-kube-api-access-d2w9z\") pod \"e8d54713-f5b1-4f71-a8a1-8b604068e791\" (UID: \"e8d54713-f5b1-4f71-a8a1-8b604068e791\") " Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.038028 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-utilities" (OuterVolumeSpecName: "utilities") pod "e8d54713-f5b1-4f71-a8a1-8b604068e791" (UID: "e8d54713-f5b1-4f71-a8a1-8b604068e791"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.050414 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d54713-f5b1-4f71-a8a1-8b604068e791-kube-api-access-d2w9z" (OuterVolumeSpecName: "kube-api-access-d2w9z") pod "e8d54713-f5b1-4f71-a8a1-8b604068e791" (UID: "e8d54713-f5b1-4f71-a8a1-8b604068e791"). InnerVolumeSpecName "kube-api-access-d2w9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.099686 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8d54713-f5b1-4f71-a8a1-8b604068e791" (UID: "e8d54713-f5b1-4f71-a8a1-8b604068e791"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.139840 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.139918 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d54713-f5b1-4f71-a8a1-8b604068e791-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.139937 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2w9z\" (UniqueName: \"kubernetes.io/projected/e8d54713-f5b1-4f71-a8a1-8b604068e791-kube-api-access-d2w9z\") on node \"crc\" DevicePath \"\"" Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.889503 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7jb5m" Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.917107 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jb5m"] Nov 23 07:34:36 crc kubenswrapper[4988]: I1123 07:34:36.924454 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7jb5m"] Nov 23 07:34:38 crc kubenswrapper[4988]: I1123 07:34:38.511925 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d54713-f5b1-4f71-a8a1-8b604068e791" path="/var/lib/kubelet/pods/e8d54713-f5b1-4f71-a8a1-8b604068e791/volumes" Nov 23 07:34:51 crc kubenswrapper[4988]: I1123 07:34:51.672568 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:34:51 crc kubenswrapper[4988]: I1123 07:34:51.673220 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:34:56 crc kubenswrapper[4988]: I1123 07:34:56.049072 4988 scope.go:117] "RemoveContainer" containerID="f4b09f8b08292085eb13e84e0b106be140e691ffa4042f453c1c61a84eee00f2" Nov 23 07:34:56 crc kubenswrapper[4988]: I1123 07:34:56.097597 4988 scope.go:117] "RemoveContainer" containerID="597b114f4e8e69b912b9d346f30e68abbd144582f180126eaf9b784e82c425f1" Nov 23 07:34:56 crc kubenswrapper[4988]: I1123 07:34:56.135170 4988 scope.go:117] "RemoveContainer" containerID="1b276497d5c675685f70e2bb62342cba771c77e0ae43c381262f1bde98310661" Nov 23 07:35:21 crc kubenswrapper[4988]: I1123 07:35:21.673134 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:35:21 crc kubenswrapper[4988]: I1123 07:35:21.673859 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:35:51 crc kubenswrapper[4988]: I1123 07:35:51.672657 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:35:51 crc kubenswrapper[4988]: I1123 07:35:51.673372 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:35:51 crc kubenswrapper[4988]: I1123 07:35:51.673497 4988 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:35:51 crc kubenswrapper[4988]: I1123 07:35:51.674853 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:35:51 crc kubenswrapper[4988]: I1123 07:35:51.674975 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" gracePeriod=600 Nov 23 07:35:51 crc kubenswrapper[4988]: E1123 07:35:51.803391 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:35:52 crc kubenswrapper[4988]: I1123 07:35:52.587554 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" exitCode=0 Nov 23 07:35:52 crc kubenswrapper[4988]: I1123 07:35:52.587625 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"} Nov 23 07:35:52 crc kubenswrapper[4988]: I1123 07:35:52.587675 4988 scope.go:117] "RemoveContainer" containerID="b21cd6d3cf2eaac3dc33ac23608c5389665761d31ea85b0dbd282269f51fb71c" Nov 23 07:35:52 crc kubenswrapper[4988]: I1123 07:35:52.588575 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:35:52 crc kubenswrapper[4988]: E1123 07:35:52.588948 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:36:04 crc kubenswrapper[4988]: I1123 07:36:04.496727 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:36:04 crc kubenswrapper[4988]: E1123 07:36:04.498389 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 
Nov 23 07:36:19 crc kubenswrapper[4988]: I1123 07:36:19.496407 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:36:19 crc kubenswrapper[4988]: E1123 07:36:19.497151 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:36:33 crc kubenswrapper[4988]: I1123 07:36:33.496415 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:36:33 crc kubenswrapper[4988]: E1123 07:36:33.497544 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:36:45 crc kubenswrapper[4988]: I1123 07:36:45.496264 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:36:45 crc kubenswrapper[4988]: E1123 07:36:45.497301 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:36:56 crc kubenswrapper[4988]: I1123 07:36:56.496157 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:36:56 crc kubenswrapper[4988]: E1123 07:36:56.496981 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:37:10 crc kubenswrapper[4988]: I1123 07:37:10.496367 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:37:10 crc kubenswrapper[4988]: E1123 07:37:10.497367 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:37:21 crc kubenswrapper[4988]: I1123 07:37:21.496578 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:37:21 crc kubenswrapper[4988]: E1123 07:37:21.497403 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:37:32 crc kubenswrapper[4988]: I1123 07:37:32.497414 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:37:32 crc kubenswrapper[4988]: E1123 07:37:32.498456 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:37:44 crc kubenswrapper[4988]: I1123 07:37:44.496677 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:37:44 crc kubenswrapper[4988]: E1123 07:37:44.497543 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:37:55 crc kubenswrapper[4988]: I1123 07:37:55.496696 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:37:55 crc kubenswrapper[4988]: E1123 07:37:55.498074 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:38:10 crc kubenswrapper[4988]: I1123 07:38:10.496345 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:38:10 crc kubenswrapper[4988]: E1123 07:38:10.497173 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:38:24 crc kubenswrapper[4988]: I1123 07:38:24.498121 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:38:24 crc kubenswrapper[4988]: E1123 07:38:24.499365 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:38:38 crc kubenswrapper[4988]: I1123 07:38:38.499886 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:38:38 crc kubenswrapper[4988]: E1123 07:38:38.500592 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:38:49 crc kubenswrapper[4988]: I1123 07:38:49.497959 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:38:49 crc kubenswrapper[4988]: E1123 07:38:49.499179 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:39:00 crc kubenswrapper[4988]: I1123 07:39:00.497577 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:39:00 crc kubenswrapper[4988]: E1123 07:39:00.498250 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:39:15 crc kubenswrapper[4988]: I1123 07:39:15.496147 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:39:15 crc kubenswrapper[4988]: E1123 07:39:15.497333 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:39:28 crc kubenswrapper[4988]: I1123 07:39:28.504618 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:39:28 crc kubenswrapper[4988]: E1123 07:39:28.505862 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:39:40 crc kubenswrapper[4988]: I1123 07:39:40.496311 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:39:40 crc kubenswrapper[4988]: E1123 07:39:40.499012 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:39:54 crc kubenswrapper[4988]: I1123 07:39:54.496333 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:39:54 crc kubenswrapper[4988]: E1123 07:39:54.498961 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:40:08 crc kubenswrapper[4988]: I1123 07:40:08.503534 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:40:08 crc kubenswrapper[4988]: E1123 07:40:08.506682 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:40:21 crc kubenswrapper[4988]: I1123 07:40:21.495948 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:40:21 crc kubenswrapper[4988]: E1123 07:40:21.497379 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.153178 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b62fs"] Nov 23 07:40:27 crc kubenswrapper[4988]: E1123 07:40:27.153817 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerName="extract-utilities" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.153834 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerName="extract-utilities" Nov 23 07:40:27 crc kubenswrapper[4988]: E1123 07:40:27.153853 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d54713-f5b1-4f71-a8a1-8b604068e791" 
containerName="registry-server" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.153860 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerName="registry-server" Nov 23 07:40:27 crc kubenswrapper[4988]: E1123 07:40:27.153889 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerName="extract-content" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.153899 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerName="extract-content" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.154061 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d54713-f5b1-4f71-a8a1-8b604068e791" containerName="registry-server" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.155180 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.185928 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b62fs"] Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.262268 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-catalog-content\") pod \"redhat-operators-b62fs\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") " pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.262315 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwqq\" (UniqueName: \"kubernetes.io/projected/08946a2c-46a9-4ae4-b3a5-7a83355c0714-kube-api-access-nqwqq\") pod \"redhat-operators-b62fs\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") " pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.262454 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-utilities\") pod \"redhat-operators-b62fs\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") " pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.364301 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-catalog-content\") pod \"redhat-operators-b62fs\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") " pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.364352 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqwqq\" (UniqueName: \"kubernetes.io/projected/08946a2c-46a9-4ae4-b3a5-7a83355c0714-kube-api-access-nqwqq\") pod \"redhat-operators-b62fs\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") " pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.364381 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-utilities\") pod \"redhat-operators-b62fs\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") " 
pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.364967 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-catalog-content\") pod \"redhat-operators-b62fs\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") " pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.365010 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-utilities\") pod \"redhat-operators-b62fs\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") " pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.383131 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqwqq\" (UniqueName: \"kubernetes.io/projected/08946a2c-46a9-4ae4-b3a5-7a83355c0714-kube-api-access-nqwqq\") pod \"redhat-operators-b62fs\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") " pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.498476 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b62fs" Nov 23 07:40:27 crc kubenswrapper[4988]: I1123 07:40:27.943382 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b62fs"] Nov 23 07:40:28 crc kubenswrapper[4988]: I1123 07:40:28.189250 4988 generic.go:334] "Generic (PLEG): container finished" podID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerID="dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1" exitCode=0 Nov 23 07:40:28 crc kubenswrapper[4988]: I1123 07:40:28.189317 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b62fs" event={"ID":"08946a2c-46a9-4ae4-b3a5-7a83355c0714","Type":"ContainerDied","Data":"dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1"} Nov 23 07:40:28 crc kubenswrapper[4988]: I1123 07:40:28.189348 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b62fs" event={"ID":"08946a2c-46a9-4ae4-b3a5-7a83355c0714","Type":"ContainerStarted","Data":"550ce5faebe039b464edc300fd2606f9d7217c0e53e22cdd37a25840972aceb5"} Nov 23 07:40:28 crc kubenswrapper[4988]: I1123 07:40:28.191205 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:40:30 crc kubenswrapper[4988]: I1123 07:40:30.211071 4988 generic.go:334] "Generic (PLEG): container finished" podID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerID="850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00" exitCode=0 Nov 23 07:40:30 crc kubenswrapper[4988]: I1123 07:40:30.211149 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b62fs" event={"ID":"08946a2c-46a9-4ae4-b3a5-7a83355c0714","Type":"ContainerDied","Data":"850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00"} Nov 23 07:40:31 crc kubenswrapper[4988]: I1123 07:40:31.227939 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b62fs" event={"ID":"08946a2c-46a9-4ae4-b3a5-7a83355c0714","Type":"ContainerStarted","Data":"bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f"} Nov 23 07:40:31 crc kubenswrapper[4988]: 
Nov 23 07:40:31 crc kubenswrapper[4988]: I1123 07:40:31.265899 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b62fs" podStartSLOduration=1.839238914 podStartE2EDuration="4.265880209s" podCreationTimestamp="2025-11-23 07:40:27 +0000 UTC" firstStartedPulling="2025-11-23 07:40:28.190969832 +0000 UTC m=+3280.499482585" lastFinishedPulling="2025-11-23 07:40:30.617611107 +0000 UTC m=+3282.926123880" observedRunningTime="2025-11-23 07:40:31.259961084 +0000 UTC m=+3283.568473877" watchObservedRunningTime="2025-11-23 07:40:31.265880209 +0000 UTC m=+3283.574392982"
Nov 23 07:40:32 crc kubenswrapper[4988]: I1123 07:40:32.496800 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:40:32 crc kubenswrapper[4988]: E1123 07:40:32.497153 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:40:37 crc kubenswrapper[4988]: I1123 07:40:37.499182 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b62fs"
Nov 23 07:40:37 crc kubenswrapper[4988]: I1123 07:40:37.499850 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b62fs"
Nov 23 07:40:38 crc kubenswrapper[4988]: I1123 07:40:38.561527 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b62fs" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerName="registry-server" probeResult="failure" output=<
Nov 23 07:40:38 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s
Nov 23 07:40:38 crc kubenswrapper[4988]: >
Nov 23 07:40:45 crc kubenswrapper[4988]: I1123 07:40:45.496440 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:40:45 crc kubenswrapper[4988]: E1123 07:40:45.497281 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 07:40:47 crc kubenswrapper[4988]: I1123 07:40:47.583817 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b62fs"
Nov 23 07:40:47 crc kubenswrapper[4988]: I1123 07:40:47.653567 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b62fs"
Nov 23 07:40:47 crc kubenswrapper[4988]: I1123 07:40:47.837932 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b62fs"]
Nov 23 07:40:49 crc kubenswrapper[4988]: I1123 07:40:49.424862 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b62fs" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerName="registry-server" containerID="cri-o://bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f" gracePeriod=2
Nov 23 07:40:49 crc kubenswrapper[4988]: I1123 07:40:49.888576 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b62fs"
Nov 23 07:40:49 crc kubenswrapper[4988]: I1123 07:40:49.951966 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-utilities\") pod \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") "
Nov 23 07:40:49 crc kubenswrapper[4988]: I1123 07:40:49.952120 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqwqq\" (UniqueName: \"kubernetes.io/projected/08946a2c-46a9-4ae4-b3a5-7a83355c0714-kube-api-access-nqwqq\") pod \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") "
Nov 23 07:40:49 crc kubenswrapper[4988]: I1123 07:40:49.952218 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-catalog-content\") pod \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\" (UID: \"08946a2c-46a9-4ae4-b3a5-7a83355c0714\") "
Nov 23 07:40:49 crc kubenswrapper[4988]: I1123 07:40:49.953132 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-utilities" (OuterVolumeSpecName: "utilities") pod "08946a2c-46a9-4ae4-b3a5-7a83355c0714" (UID: "08946a2c-46a9-4ae4-b3a5-7a83355c0714"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:40:49 crc kubenswrapper[4988]: I1123 07:40:49.960019 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08946a2c-46a9-4ae4-b3a5-7a83355c0714-kube-api-access-nqwqq" (OuterVolumeSpecName: "kube-api-access-nqwqq") pod "08946a2c-46a9-4ae4-b3a5-7a83355c0714" (UID: "08946a2c-46a9-4ae4-b3a5-7a83355c0714"). InnerVolumeSpecName "kube-api-access-nqwqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.054388 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.054428 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqwqq\" (UniqueName: \"kubernetes.io/projected/08946a2c-46a9-4ae4-b3a5-7a83355c0714-kube-api-access-nqwqq\") on node \"crc\" DevicePath \"\""
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.063517 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08946a2c-46a9-4ae4-b3a5-7a83355c0714" (UID: "08946a2c-46a9-4ae4-b3a5-7a83355c0714"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.155765 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08946a2c-46a9-4ae4-b3a5-7a83355c0714-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.436777 4988 generic.go:334] "Generic (PLEG): container finished" podID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerID="bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f" exitCode=0
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.436846 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b62fs"
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.436841 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b62fs" event={"ID":"08946a2c-46a9-4ae4-b3a5-7a83355c0714","Type":"ContainerDied","Data":"bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f"}
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.437028 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b62fs" event={"ID":"08946a2c-46a9-4ae4-b3a5-7a83355c0714","Type":"ContainerDied","Data":"550ce5faebe039b464edc300fd2606f9d7217c0e53e22cdd37a25840972aceb5"}
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.437068 4988 scope.go:117] "RemoveContainer" containerID="bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f"
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.462815 4988 scope.go:117] "RemoveContainer" containerID="850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00"
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.515589 4988 scope.go:117] "RemoveContainer" containerID="dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1"
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.527343 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b62fs"]
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.527395 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b62fs"]
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.540811 4988 scope.go:117] "RemoveContainer" containerID="bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f"
Nov 23 07:40:50 crc kubenswrapper[4988]: E1123 07:40:50.541483 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f\": container with ID starting with bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f not found: ID does not exist" containerID="bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f"
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.541532 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f"} err="failed to get container status \"bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f\": rpc error: code = NotFound desc = could not find container \"bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f\": container with ID starting with bd7e8bec56b47848899c84424a3fa9b6fea32eadf8a48df11cd4ac2f4fd68f0f not found: ID does not exist"
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.541566 4988 scope.go:117] "RemoveContainer" containerID="850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00"
Nov 23 07:40:50 crc kubenswrapper[4988]: E1123 07:40:50.542166 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00\": container with ID starting with 850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00 not found: ID does not exist" containerID="850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00"
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.542362 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00"} err="failed to get container status \"850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00\": rpc error: code = NotFound desc = could not find container \"850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00\": container with ID starting with 850cb506022a1ac10ab552a62f44b653d6b6419f9cd18cdcb1036e9ebb109d00 not found: ID does not exist"
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.542535 4988 scope.go:117] "RemoveContainer" containerID="dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1"
Nov 23 07:40:50 crc kubenswrapper[4988]: E1123 07:40:50.542995 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1\": container with ID starting with dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1 not found: ID does not exist" containerID="dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1"
Nov 23 07:40:50 crc kubenswrapper[4988]: I1123 07:40:50.543056 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1"} err="failed to get container status \"dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1\": rpc error: code = NotFound desc = could not find container \"dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1\": container with ID starting with dcf8742f9b30d2923564f75420c76f2a11e72ae1206692e35a2ae3982081e0f1 not found: ID does not exist"
Nov 23 07:40:52 crc kubenswrapper[4988]: I1123 07:40:52.506144 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" path="/var/lib/kubelet/pods/08946a2c-46a9-4ae4-b3a5-7a83355c0714/volumes"
Nov 23 07:40:59 crc kubenswrapper[4988]: I1123 07:40:59.496951 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e"
Nov 23 07:41:00 crc kubenswrapper[4988]: I1123 07:41:00.584309 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"353011342cd59ed326f2d4ddeec78a9773bdb0e44b28542d357a126781b91654"}
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.224147 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6rt"]
Nov 23 07:42:10 crc kubenswrapper[4988]: E1123 07:42:10.225509 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerName="extract-utilities"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.225542 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerName="extract-utilities"
Nov 23 07:42:10 crc kubenswrapper[4988]: E1123 07:42:10.225591 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerName="extract-content"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.225610 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerName="extract-content"
Nov 23 07:42:10 crc kubenswrapper[4988]: E1123 07:42:10.225653 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerName="registry-server"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.225671 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerName="registry-server"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.225981 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="08946a2c-46a9-4ae4-b3a5-7a83355c0714" containerName="registry-server"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.228095 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.243689 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6rt"]
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.425792 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtnr\" (UniqueName: \"kubernetes.io/projected/128f25cd-8696-4c73-98c1-9c9abb0ce91a-kube-api-access-cwtnr\") pod \"redhat-marketplace-jc6rt\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") " pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.426165 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-catalog-content\") pod \"redhat-marketplace-jc6rt\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") " pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.426348 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-utilities\") pod \"redhat-marketplace-jc6rt\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") " pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.527341 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwtnr\" (UniqueName: \"kubernetes.io/projected/128f25cd-8696-4c73-98c1-9c9abb0ce91a-kube-api-access-cwtnr\") pod \"redhat-marketplace-jc6rt\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") " pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.527399 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-catalog-content\") pod \"redhat-marketplace-jc6rt\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") " pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.527436 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-utilities\") pod \"redhat-marketplace-jc6rt\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") " pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.528256 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-utilities\") pod \"redhat-marketplace-jc6rt\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") " pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.528273 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-catalog-content\") pod \"redhat-marketplace-jc6rt\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") " pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.548075 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwtnr\" (UniqueName: \"kubernetes.io/projected/128f25cd-8696-4c73-98c1-9c9abb0ce91a-kube-api-access-cwtnr\") pod \"redhat-marketplace-jc6rt\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") " pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:10 crc kubenswrapper[4988]: I1123 07:42:10.551805 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:11 crc kubenswrapper[4988]: I1123 07:42:11.046591 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6rt"]
Nov 23 07:42:11 crc kubenswrapper[4988]: I1123 07:42:11.235371 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6rt" event={"ID":"128f25cd-8696-4c73-98c1-9c9abb0ce91a","Type":"ContainerStarted","Data":"1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d"}
Nov 23 07:42:11 crc kubenswrapper[4988]: I1123 07:42:11.235424 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6rt" event={"ID":"128f25cd-8696-4c73-98c1-9c9abb0ce91a","Type":"ContainerStarted","Data":"3337d7205f321aefd02c4a619c9c5162d8d6aeac8221dec81c00283dcccf86b5"}
Nov 23 07:42:12 crc kubenswrapper[4988]: I1123 07:42:12.244538 4988 generic.go:334] "Generic (PLEG): container finished" podID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerID="1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d" exitCode=0
Nov 23 07:42:12 crc kubenswrapper[4988]: I1123 07:42:12.244973 4988 generic.go:334] "Generic (PLEG): container finished" podID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerID="89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1" exitCode=0
Nov 23 07:42:12 crc kubenswrapper[4988]: I1123 07:42:12.244658 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6rt" event={"ID":"128f25cd-8696-4c73-98c1-9c9abb0ce91a","Type":"ContainerDied","Data":"1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d"}
Nov 23 07:42:12 crc kubenswrapper[4988]: I1123 07:42:12.245038 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6rt" event={"ID":"128f25cd-8696-4c73-98c1-9c9abb0ce91a","Type":"ContainerDied","Data":"89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1"}
Nov 23 07:42:13 crc kubenswrapper[4988]: I1123 07:42:13.257620 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6rt" event={"ID":"128f25cd-8696-4c73-98c1-9c9abb0ce91a","Type":"ContainerStarted","Data":"916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54"}
Nov 23 07:42:13 crc kubenswrapper[4988]: I1123 07:42:13.281780 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jc6rt" podStartSLOduration=1.887452207 podStartE2EDuration="3.281760123s" podCreationTimestamp="2025-11-23 07:42:10 +0000 UTC" firstStartedPulling="2025-11-23 07:42:11.238053556 +0000 UTC m=+3383.546566319" lastFinishedPulling="2025-11-23 07:42:12.632361452 +0000 UTC m=+3384.940874235" observedRunningTime="2025-11-23 07:42:13.273510461 +0000 UTC m=+3385.582023254" watchObservedRunningTime="2025-11-23 07:42:13.281760123 +0000 UTC m=+3385.590272896"
Nov 23 07:42:20 crc kubenswrapper[4988]: I1123 07:42:20.552272 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:20 crc kubenswrapper[4988]: I1123 07:42:20.552915 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:20 crc kubenswrapper[4988]: I1123 07:42:20.621366 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:21 crc kubenswrapper[4988]: I1123 07:42:21.395910 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:21 crc kubenswrapper[4988]: I1123 07:42:21.459684 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6rt"]
Nov 23 07:42:23 crc kubenswrapper[4988]: I1123 07:42:23.340794 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jc6rt" podUID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerName="registry-server" containerID="cri-o://916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54" gracePeriod=2
Nov 23 07:42:23 crc kubenswrapper[4988]: I1123 07:42:23.786750 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:23 crc kubenswrapper[4988]: I1123 07:42:23.950697 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwtnr\" (UniqueName: \"kubernetes.io/projected/128f25cd-8696-4c73-98c1-9c9abb0ce91a-kube-api-access-cwtnr\") pod \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") "
Nov 23 07:42:23 crc kubenswrapper[4988]: I1123 07:42:23.950780 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-utilities\") pod \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") "
Nov 23 07:42:23 crc kubenswrapper[4988]: I1123 07:42:23.950817 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-catalog-content\") pod \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\" (UID: \"128f25cd-8696-4c73-98c1-9c9abb0ce91a\") "
Nov 23 07:42:23 crc kubenswrapper[4988]: I1123 07:42:23.952471 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-utilities" (OuterVolumeSpecName: "utilities") pod "128f25cd-8696-4c73-98c1-9c9abb0ce91a" (UID: "128f25cd-8696-4c73-98c1-9c9abb0ce91a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:42:23 crc kubenswrapper[4988]: I1123 07:42:23.965606 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128f25cd-8696-4c73-98c1-9c9abb0ce91a-kube-api-access-cwtnr" (OuterVolumeSpecName: "kube-api-access-cwtnr") pod "128f25cd-8696-4c73-98c1-9c9abb0ce91a" (UID: "128f25cd-8696-4c73-98c1-9c9abb0ce91a"). InnerVolumeSpecName "kube-api-access-cwtnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:42:23 crc kubenswrapper[4988]: I1123 07:42:23.992961 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "128f25cd-8696-4c73-98c1-9c9abb0ce91a" (UID: "128f25cd-8696-4c73-98c1-9c9abb0ce91a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.053079 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwtnr\" (UniqueName: \"kubernetes.io/projected/128f25cd-8696-4c73-98c1-9c9abb0ce91a-kube-api-access-cwtnr\") on node \"crc\" DevicePath \"\""
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.053133 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.053152 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128f25cd-8696-4c73-98c1-9c9abb0ce91a-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.355666 4988 generic.go:334] "Generic (PLEG): container finished" podID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerID="916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54" exitCode=0
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.355722 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6rt" event={"ID":"128f25cd-8696-4c73-98c1-9c9abb0ce91a","Type":"ContainerDied","Data":"916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54"}
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.355785 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jc6rt" event={"ID":"128f25cd-8696-4c73-98c1-9c9abb0ce91a","Type":"ContainerDied","Data":"3337d7205f321aefd02c4a619c9c5162d8d6aeac8221dec81c00283dcccf86b5"}
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.355817 4988 scope.go:117] "RemoveContainer" containerID="916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54"
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.355824 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jc6rt"
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.392219 4988 scope.go:117] "RemoveContainer" containerID="89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1"
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.422510 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6rt"]
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.424096 4988 scope.go:117] "RemoveContainer" containerID="1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d"
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.429415 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jc6rt"]
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.459828 4988 scope.go:117] "RemoveContainer" containerID="916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54"
Nov 23 07:42:24 crc kubenswrapper[4988]: E1123 07:42:24.460423 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54\": container with ID starting with 916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54 not found: ID does not exist" containerID="916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54"
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.460475 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54"} err="failed to get container status \"916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54\": rpc error: code = NotFound desc = could not find container \"916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54\": container with ID starting with 916297928b5fd51d875e68a787c712b5feb1b70c267037b46c52b2a727f2de54 not found: ID does not exist"
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.460507 4988 scope.go:117] "RemoveContainer" containerID="89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1"
Nov 23 07:42:24 crc kubenswrapper[4988]: E1123 07:42:24.460990 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1\": container with ID starting with 89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1 not found: ID does not exist" containerID="89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1"
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.461075 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1"} err="failed to get container status \"89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1\": rpc error: code = NotFound desc = could not find container \"89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1\": container with ID starting with 89469c5daf1c3687097bc0303a009cfae9f1213b24dd7fddcc16f5ece5daf3f1 not found: ID does not exist"
Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.461130 4988 scope.go:117] "RemoveContainer" containerID="1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d"
Nov 23 07:42:24 crc kubenswrapper[4988]: E1123 07:42:24.461623 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d\": container with ID starting with 1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d not found: ID does not exist" containerID="1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d"
failed" err="rpc error: code = NotFound desc = could not find container \"1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d\": container with ID starting with 1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d not found: ID does not exist" containerID="1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d" Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.461669 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d"} err="failed to get container status \"1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d\": rpc error: code = NotFound desc = could not find container \"1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d\": container with ID starting with 1ec679be00a03ec951ae59254963c935700149068cb063dd38a29e825f480c4d not found: ID does not exist" Nov 23 07:42:24 crc kubenswrapper[4988]: I1123 07:42:24.506742 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" path="/var/lib/kubelet/pods/128f25cd-8696-4c73-98c1-9c9abb0ce91a/volumes" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.615099 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-69qcx"] Nov 23 07:42:38 crc kubenswrapper[4988]: E1123 07:42:38.616605 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerName="extract-content" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.616641 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerName="extract-content" Nov 23 07:42:38 crc kubenswrapper[4988]: E1123 07:42:38.616673 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerName="extract-utilities" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.616690 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerName="extract-utilities" Nov 23 07:42:38 crc kubenswrapper[4988]: E1123 07:42:38.616737 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerName="registry-server" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.616754 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerName="registry-server" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.617119 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="128f25cd-8696-4c73-98c1-9c9abb0ce91a" containerName="registry-server" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.619365 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.632684 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-69qcx"] Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.713797 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrrc\" (UniqueName: \"kubernetes.io/projected/60102b09-3712-4553-bff5-892d580c24ec-kube-api-access-slrrc\") pod \"certified-operators-69qcx\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.713919 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-utilities\") pod \"certified-operators-69qcx\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.713970 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-catalog-content\") pod \"certified-operators-69qcx\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.815960 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slrrc\" (UniqueName: \"kubernetes.io/projected/60102b09-3712-4553-bff5-892d580c24ec-kube-api-access-slrrc\") pod \"certified-operators-69qcx\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.816021 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-utilities\") pod \"certified-operators-69qcx\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.816040 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-catalog-content\") pod \"certified-operators-69qcx\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.816534 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-catalog-content\") pod \"certified-operators-69qcx\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.816696 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-utilities\") pod \"certified-operators-69qcx\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.839216 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-slrrc\" (UniqueName: \"kubernetes.io/projected/60102b09-3712-4553-bff5-892d580c24ec-kube-api-access-slrrc\") pod \"certified-operators-69qcx\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:38 crc kubenswrapper[4988]: I1123 07:42:38.956236 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:39 crc kubenswrapper[4988]: I1123 07:42:39.422771 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-69qcx"] Nov 23 07:42:39 crc kubenswrapper[4988]: I1123 07:42:39.508776 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69qcx" event={"ID":"60102b09-3712-4553-bff5-892d580c24ec","Type":"ContainerStarted","Data":"4b4c2a85db29dcb8b66ee9980b9ed2aa66e6b3abd60b273006c9edb97a768224"} Nov 23 07:42:40 crc kubenswrapper[4988]: I1123 07:42:40.519050 4988 generic.go:334] "Generic (PLEG): container finished" podID="60102b09-3712-4553-bff5-892d580c24ec" containerID="cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873" exitCode=0 Nov 23 07:42:40 crc kubenswrapper[4988]: I1123 07:42:40.519141 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69qcx" event={"ID":"60102b09-3712-4553-bff5-892d580c24ec","Type":"ContainerDied","Data":"cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873"} Nov 23 07:42:41 crc kubenswrapper[4988]: I1123 07:42:41.533665 4988 generic.go:334] "Generic (PLEG): container finished" podID="60102b09-3712-4553-bff5-892d580c24ec" containerID="6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e" exitCode=0 Nov 23 07:42:41 crc kubenswrapper[4988]: I1123 07:42:41.533784 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69qcx" event={"ID":"60102b09-3712-4553-bff5-892d580c24ec","Type":"ContainerDied","Data":"6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e"} Nov 23 07:42:42 crc kubenswrapper[4988]: I1123 07:42:42.548993 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69qcx" event={"ID":"60102b09-3712-4553-bff5-892d580c24ec","Type":"ContainerStarted","Data":"409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48"} Nov 23 07:42:42 crc kubenswrapper[4988]: I1123 07:42:42.588773 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-69qcx" podStartSLOduration=3.188435627 podStartE2EDuration="4.588738989s" podCreationTimestamp="2025-11-23 07:42:38 +0000 UTC" firstStartedPulling="2025-11-23 07:42:40.521255281 +0000 UTC m=+3412.829768064" lastFinishedPulling="2025-11-23 07:42:41.921558653 +0000 UTC m=+3414.230071426" observedRunningTime="2025-11-23 07:42:42.583343477 +0000 UTC m=+3414.891856270" watchObservedRunningTime="2025-11-23 07:42:42.588738989 +0000 UTC m=+3414.897251782" Nov 23 07:42:48 crc kubenswrapper[4988]: I1123 07:42:48.957030 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:48 crc kubenswrapper[4988]: I1123 07:42:48.957843 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:49 crc kubenswrapper[4988]: I1123 07:42:49.022159 4988 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:49 crc kubenswrapper[4988]: I1123 07:42:49.686675 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:49 crc kubenswrapper[4988]: I1123 07:42:49.763185 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-69qcx"] Nov 23 07:42:51 crc kubenswrapper[4988]: I1123 07:42:51.628381 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-69qcx" podUID="60102b09-3712-4553-bff5-892d580c24ec" containerName="registry-server" containerID="cri-o://409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48" gracePeriod=2 Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.201948 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.333609 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-catalog-content\") pod \"60102b09-3712-4553-bff5-892d580c24ec\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.334030 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-utilities\") pod \"60102b09-3712-4553-bff5-892d580c24ec\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.334062 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slrrc\" (UniqueName: \"kubernetes.io/projected/60102b09-3712-4553-bff5-892d580c24ec-kube-api-access-slrrc\") pod \"60102b09-3712-4553-bff5-892d580c24ec\" (UID: \"60102b09-3712-4553-bff5-892d580c24ec\") " Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.334793 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-utilities" (OuterVolumeSpecName: "utilities") pod "60102b09-3712-4553-bff5-892d580c24ec" (UID: "60102b09-3712-4553-bff5-892d580c24ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.344656 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60102b09-3712-4553-bff5-892d580c24ec-kube-api-access-slrrc" (OuterVolumeSpecName: "kube-api-access-slrrc") pod "60102b09-3712-4553-bff5-892d580c24ec" (UID: "60102b09-3712-4553-bff5-892d580c24ec"). InnerVolumeSpecName "kube-api-access-slrrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.384233 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60102b09-3712-4553-bff5-892d580c24ec" (UID: "60102b09-3712-4553-bff5-892d580c24ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.436065 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slrrc\" (UniqueName: \"kubernetes.io/projected/60102b09-3712-4553-bff5-892d580c24ec-kube-api-access-slrrc\") on node \"crc\" DevicePath \"\"" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.436145 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.436159 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60102b09-3712-4553-bff5-892d580c24ec-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.641445 4988 generic.go:334] "Generic (PLEG): container finished" podID="60102b09-3712-4553-bff5-892d580c24ec" containerID="409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48" exitCode=0 Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.641508 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69qcx" event={"ID":"60102b09-3712-4553-bff5-892d580c24ec","Type":"ContainerDied","Data":"409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48"} Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.641608 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-69qcx" event={"ID":"60102b09-3712-4553-bff5-892d580c24ec","Type":"ContainerDied","Data":"4b4c2a85db29dcb8b66ee9980b9ed2aa66e6b3abd60b273006c9edb97a768224"} Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.641534 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-69qcx" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.641650 4988 scope.go:117] "RemoveContainer" containerID="409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.672224 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-69qcx"] Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.682441 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-69qcx"] Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.688898 4988 scope.go:117] "RemoveContainer" containerID="6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.720439 4988 scope.go:117] "RemoveContainer" containerID="cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.764383 4988 scope.go:117] "RemoveContainer" containerID="409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48" Nov 23 07:42:52 crc kubenswrapper[4988]: E1123 07:42:52.764888 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48\": container with ID starting with 409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48 not found: ID does not exist" containerID="409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.764942 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48"} err="failed to get container status \"409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48\": rpc error: code = NotFound desc = could not find container \"409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48\": container with ID starting with 409ac6eb43208f2ccad099d623e9d2b93521201c7ca17dab10d913a340753d48 not found: ID does not exist" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.764977 4988 scope.go:117] "RemoveContainer" containerID="6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e" Nov 23 07:42:52 crc kubenswrapper[4988]: E1123 07:42:52.765424 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e\": container with ID starting with 6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e not found: ID does not exist" containerID="6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.765474 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e"} err="failed to get container status \"6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e\": rpc error: code = NotFound desc = could not find container \"6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e\": container with ID starting with 6e14a48be312eadc639f643863ea3bf069a60795a032844843491375c1f6a16e not found: ID does not exist" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.765509 4988 scope.go:117] "RemoveContainer" 
containerID="cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873" Nov 23 07:42:52 crc kubenswrapper[4988]: E1123 07:42:52.765842 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873\": container with ID starting with cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873 not found: ID does not exist" containerID="cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873" Nov 23 07:42:52 crc kubenswrapper[4988]: I1123 07:42:52.765877 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873"} err="failed to get container status \"cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873\": rpc error: code = NotFound desc = could not find container \"cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873\": container with ID starting with cd0b72d17789f7ad3e615f0b28bb27238f7eb9f1208fd707e5bddd8d2d2ee873 not found: ID does not exist" Nov 23 07:42:54 crc kubenswrapper[4988]: I1123 07:42:54.511460 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60102b09-3712-4553-bff5-892d580c24ec" path="/var/lib/kubelet/pods/60102b09-3712-4553-bff5-892d580c24ec/volumes" Nov 23 07:43:21 crc kubenswrapper[4988]: I1123 07:43:21.672503 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:43:21 crc kubenswrapper[4988]: I1123 07:43:21.673166 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:43:51 crc kubenswrapper[4988]: I1123 07:43:51.672373 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:43:51 crc kubenswrapper[4988]: I1123 07:43:51.673469 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:44:21 crc kubenswrapper[4988]: I1123 07:44:21.672252 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:44:21 crc kubenswrapper[4988]: I1123 07:44:21.672927 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:44:21 crc kubenswrapper[4988]: I1123 07:44:21.672997 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:44:21 crc kubenswrapper[4988]: I1123 07:44:21.673698 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"353011342cd59ed326f2d4ddeec78a9773bdb0e44b28542d357a126781b91654"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:44:21 crc kubenswrapper[4988]: I1123 07:44:21.673818 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://353011342cd59ed326f2d4ddeec78a9773bdb0e44b28542d357a126781b91654" gracePeriod=600 Nov 23 07:44:22 crc kubenswrapper[4988]: I1123 07:44:22.531758 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="353011342cd59ed326f2d4ddeec78a9773bdb0e44b28542d357a126781b91654" exitCode=0 Nov 23 07:44:22 crc kubenswrapper[4988]: I1123 07:44:22.532391 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"353011342cd59ed326f2d4ddeec78a9773bdb0e44b28542d357a126781b91654"} Nov 23 07:44:22 crc kubenswrapper[4988]: I1123 07:44:22.532452 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98"} Nov 23 07:44:22 crc kubenswrapper[4988]: I1123 07:44:22.532473 4988 scope.go:117] "RemoveContainer" containerID="4858c0596747bff9eba4a7cd8ceecfdb0351c95e747b60d948dffa0e7317042e" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.219517 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd"] Nov 23 07:45:00 crc kubenswrapper[4988]: E1123 07:45:00.220433 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60102b09-3712-4553-bff5-892d580c24ec" containerName="registry-server" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.220452 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="60102b09-3712-4553-bff5-892d580c24ec" containerName="registry-server" Nov 23 07:45:00 crc kubenswrapper[4988]: E1123 07:45:00.220478 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60102b09-3712-4553-bff5-892d580c24ec" containerName="extract-content" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.220486 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="60102b09-3712-4553-bff5-892d580c24ec" containerName="extract-content" Nov 23 07:45:00 crc kubenswrapper[4988]: E1123 07:45:00.220510 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60102b09-3712-4553-bff5-892d580c24ec" containerName="extract-utilities" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.220518 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="60102b09-3712-4553-bff5-892d580c24ec" containerName="extract-utilities" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.220701 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="60102b09-3712-4553-bff5-892d580c24ec" containerName="registry-server" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.221329 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.225865 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.226025 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.230142 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd"] Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.376167 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24kzr\" (UniqueName: \"kubernetes.io/projected/0fa3a745-9507-4b82-80cf-1f42a0c39e84-kube-api-access-24kzr\") pod \"collect-profiles-29398065-sljhd\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.376218 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fa3a745-9507-4b82-80cf-1f42a0c39e84-secret-volume\") pod \"collect-profiles-29398065-sljhd\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.376535 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fa3a745-9507-4b82-80cf-1f42a0c39e84-config-volume\") pod \"collect-profiles-29398065-sljhd\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.478519 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fa3a745-9507-4b82-80cf-1f42a0c39e84-config-volume\") pod \"collect-profiles-29398065-sljhd\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.478619 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24kzr\" (UniqueName: \"kubernetes.io/projected/0fa3a745-9507-4b82-80cf-1f42a0c39e84-kube-api-access-24kzr\") pod \"collect-profiles-29398065-sljhd\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.478640 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fa3a745-9507-4b82-80cf-1f42a0c39e84-secret-volume\") pod 
\"collect-profiles-29398065-sljhd\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.480619 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fa3a745-9507-4b82-80cf-1f42a0c39e84-config-volume\") pod \"collect-profiles-29398065-sljhd\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.491598 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fa3a745-9507-4b82-80cf-1f42a0c39e84-secret-volume\") pod \"collect-profiles-29398065-sljhd\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.497135 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24kzr\" (UniqueName: \"kubernetes.io/projected/0fa3a745-9507-4b82-80cf-1f42a0c39e84-kube-api-access-24kzr\") pod \"collect-profiles-29398065-sljhd\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.542983 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.800089 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd"] Nov 23 07:45:00 crc kubenswrapper[4988]: W1123 07:45:00.809153 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fa3a745_9507_4b82_80cf_1f42a0c39e84.slice/crio-a55710e5cfd871139b62c750d4ef0c5d5563cc911adefb01922dc9bf5f53f6e5 WatchSource:0}: Error finding container a55710e5cfd871139b62c750d4ef0c5d5563cc911adefb01922dc9bf5f53f6e5: Status 404 returned error can't find the container with id a55710e5cfd871139b62c750d4ef0c5d5563cc911adefb01922dc9bf5f53f6e5 Nov 23 07:45:00 crc kubenswrapper[4988]: I1123 07:45:00.912784 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" event={"ID":"0fa3a745-9507-4b82-80cf-1f42a0c39e84","Type":"ContainerStarted","Data":"a55710e5cfd871139b62c750d4ef0c5d5563cc911adefb01922dc9bf5f53f6e5"} Nov 23 07:45:01 crc kubenswrapper[4988]: I1123 07:45:01.925833 4988 generic.go:334] "Generic (PLEG): container finished" podID="0fa3a745-9507-4b82-80cf-1f42a0c39e84" containerID="de13ced4d5a1ad77a551d27ce5cdfa3c5981e7903714790bd188451546d3b5d5" exitCode=0 Nov 23 07:45:01 crc kubenswrapper[4988]: I1123 07:45:01.925906 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" event={"ID":"0fa3a745-9507-4b82-80cf-1f42a0c39e84","Type":"ContainerDied","Data":"de13ced4d5a1ad77a551d27ce5cdfa3c5981e7903714790bd188451546d3b5d5"} Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.275782 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.421863 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24kzr\" (UniqueName: \"kubernetes.io/projected/0fa3a745-9507-4b82-80cf-1f42a0c39e84-kube-api-access-24kzr\") pod \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.421912 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fa3a745-9507-4b82-80cf-1f42a0c39e84-config-volume\") pod \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.421994 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fa3a745-9507-4b82-80cf-1f42a0c39e84-secret-volume\") pod \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\" (UID: \"0fa3a745-9507-4b82-80cf-1f42a0c39e84\") " Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.422926 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fa3a745-9507-4b82-80cf-1f42a0c39e84-config-volume" (OuterVolumeSpecName: "config-volume") pod "0fa3a745-9507-4b82-80cf-1f42a0c39e84" (UID: "0fa3a745-9507-4b82-80cf-1f42a0c39e84"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.434728 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa3a745-9507-4b82-80cf-1f42a0c39e84-kube-api-access-24kzr" (OuterVolumeSpecName: "kube-api-access-24kzr") pod "0fa3a745-9507-4b82-80cf-1f42a0c39e84" (UID: "0fa3a745-9507-4b82-80cf-1f42a0c39e84"). InnerVolumeSpecName "kube-api-access-24kzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.435159 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa3a745-9507-4b82-80cf-1f42a0c39e84-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0fa3a745-9507-4b82-80cf-1f42a0c39e84" (UID: "0fa3a745-9507-4b82-80cf-1f42a0c39e84"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.524270 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24kzr\" (UniqueName: \"kubernetes.io/projected/0fa3a745-9507-4b82-80cf-1f42a0c39e84-kube-api-access-24kzr\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.524312 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fa3a745-9507-4b82-80cf-1f42a0c39e84-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.524331 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fa3a745-9507-4b82-80cf-1f42a0c39e84-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.957319 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" event={"ID":"0fa3a745-9507-4b82-80cf-1f42a0c39e84","Type":"ContainerDied","Data":"a55710e5cfd871139b62c750d4ef0c5d5563cc911adefb01922dc9bf5f53f6e5"} Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.957388 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a55710e5cfd871139b62c750d4ef0c5d5563cc911adefb01922dc9bf5f53f6e5" Nov 23 07:45:03 crc kubenswrapper[4988]: I1123 07:45:03.957486 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd" Nov 23 07:45:04 crc kubenswrapper[4988]: I1123 07:45:04.384754 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd"] Nov 23 07:45:04 crc kubenswrapper[4988]: I1123 07:45:04.392370 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-kxgmd"] Nov 23 07:45:04 crc kubenswrapper[4988]: I1123 07:45:04.503490 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1530b5-0204-4087-b649-5bdc2c82d76d" path="/var/lib/kubelet/pods/df1530b5-0204-4087-b649-5bdc2c82d76d/volumes" Nov 23 07:45:16 crc kubenswrapper[4988]: I1123 07:45:16.965998 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ckwsw"] Nov 23 07:45:16 crc kubenswrapper[4988]: E1123 07:45:16.967378 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa3a745-9507-4b82-80cf-1f42a0c39e84" containerName="collect-profiles" Nov 23 07:45:16 crc kubenswrapper[4988]: I1123 07:45:16.967410 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa3a745-9507-4b82-80cf-1f42a0c39e84" containerName="collect-profiles" Nov 23 07:45:16 crc kubenswrapper[4988]: I1123 07:45:16.967773 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa3a745-9507-4b82-80cf-1f42a0c39e84" containerName="collect-profiles" Nov 23 07:45:16 crc kubenswrapper[4988]: I1123 07:45:16.969647 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:16 crc kubenswrapper[4988]: I1123 07:45:16.978138 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckwsw"] Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.148787 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56m9q\" (UniqueName: \"kubernetes.io/projected/9382f6b2-556f-413e-b55d-e57a72090743-kube-api-access-56m9q\") pod \"community-operators-ckwsw\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.148849 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-catalog-content\") pod \"community-operators-ckwsw\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.148877 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-utilities\") pod \"community-operators-ckwsw\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.250571 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56m9q\" (UniqueName: \"kubernetes.io/projected/9382f6b2-556f-413e-b55d-e57a72090743-kube-api-access-56m9q\") pod \"community-operators-ckwsw\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.250694 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-catalog-content\") pod \"community-operators-ckwsw\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.250746 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-utilities\") pod \"community-operators-ckwsw\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.251426 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-catalog-content\") pod \"community-operators-ckwsw\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.251436 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-utilities\") pod \"community-operators-ckwsw\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.273057 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-56m9q\" (UniqueName: \"kubernetes.io/projected/9382f6b2-556f-413e-b55d-e57a72090743-kube-api-access-56m9q\") pod \"community-operators-ckwsw\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.303455 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:17 crc kubenswrapper[4988]: I1123 07:45:17.844408 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckwsw"] Nov 23 07:45:18 crc kubenswrapper[4988]: I1123 07:45:18.078604 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckwsw" event={"ID":"9382f6b2-556f-413e-b55d-e57a72090743","Type":"ContainerStarted","Data":"38346e5e7aa704b59b87700ce0e3430f30dbb05570ead900b999c710bf2d1686"} Nov 23 07:45:19 crc kubenswrapper[4988]: I1123 07:45:19.092368 4988 generic.go:334] "Generic (PLEG): container finished" podID="9382f6b2-556f-413e-b55d-e57a72090743" containerID="2e8f33ee119645e903059176724081c732b6a69046df0cf39ce07fcd42761627" exitCode=0 Nov 23 07:45:19 crc kubenswrapper[4988]: I1123 07:45:19.092431 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckwsw" event={"ID":"9382f6b2-556f-413e-b55d-e57a72090743","Type":"ContainerDied","Data":"2e8f33ee119645e903059176724081c732b6a69046df0cf39ce07fcd42761627"} Nov 23 07:45:20 crc kubenswrapper[4988]: I1123 07:45:20.102926 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckwsw" event={"ID":"9382f6b2-556f-413e-b55d-e57a72090743","Type":"ContainerStarted","Data":"0b031d659cc135ee5a4d57e884d0cf7e3466216aaf7842105fea1f3fbe63020c"} Nov 23 07:45:21 crc kubenswrapper[4988]: I1123 07:45:21.119849 4988 generic.go:334] "Generic (PLEG): container finished" podID="9382f6b2-556f-413e-b55d-e57a72090743" containerID="0b031d659cc135ee5a4d57e884d0cf7e3466216aaf7842105fea1f3fbe63020c" exitCode=0 Nov 23 07:45:21 crc kubenswrapper[4988]: I1123 07:45:21.119977 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckwsw" event={"ID":"9382f6b2-556f-413e-b55d-e57a72090743","Type":"ContainerDied","Data":"0b031d659cc135ee5a4d57e884d0cf7e3466216aaf7842105fea1f3fbe63020c"} Nov 23 07:45:22 crc kubenswrapper[4988]: I1123 07:45:22.129308 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckwsw" event={"ID":"9382f6b2-556f-413e-b55d-e57a72090743","Type":"ContainerStarted","Data":"c60db2f14081dbb796f6e6e3624d16ae0e0d972e1cd005c2dab932e3473d3a49"} Nov 23 07:45:22 crc kubenswrapper[4988]: I1123 07:45:22.156543 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ckwsw" podStartSLOduration=3.747434987 podStartE2EDuration="6.156521248s" podCreationTimestamp="2025-11-23 07:45:16 +0000 UTC" firstStartedPulling="2025-11-23 07:45:19.095994394 +0000 UTC m=+3571.404507157" lastFinishedPulling="2025-11-23 07:45:21.505080655 +0000 UTC m=+3573.813593418" observedRunningTime="2025-11-23 07:45:22.151754481 +0000 UTC m=+3574.460267294" watchObservedRunningTime="2025-11-23 07:45:22.156521248 +0000 UTC m=+3574.465034021" Nov 23 07:45:27 crc kubenswrapper[4988]: I1123 07:45:27.304798 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:27 crc kubenswrapper[4988]: I1123 07:45:27.305276 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:27 crc kubenswrapper[4988]: I1123 07:45:27.366601 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:28 crc kubenswrapper[4988]: I1123 07:45:28.247961 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:31 crc kubenswrapper[4988]: I1123 07:45:31.908310 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ckwsw"] Nov 23 07:45:31 crc kubenswrapper[4988]: I1123 07:45:31.908820 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ckwsw" podUID="9382f6b2-556f-413e-b55d-e57a72090743" containerName="registry-server" containerID="cri-o://c60db2f14081dbb796f6e6e3624d16ae0e0d972e1cd005c2dab932e3473d3a49" gracePeriod=2 Nov 23 07:45:32 crc kubenswrapper[4988]: E1123 07:45:32.084777 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9382f6b2_556f_413e_b55d_e57a72090743.slice/crio-conmon-c60db2f14081dbb796f6e6e3624d16ae0e0d972e1cd005c2dab932e3473d3a49.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.227728 4988 generic.go:334] "Generic (PLEG): container finished" podID="9382f6b2-556f-413e-b55d-e57a72090743" containerID="c60db2f14081dbb796f6e6e3624d16ae0e0d972e1cd005c2dab932e3473d3a49" exitCode=0 Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.227795 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckwsw" event={"ID":"9382f6b2-556f-413e-b55d-e57a72090743","Type":"ContainerDied","Data":"c60db2f14081dbb796f6e6e3624d16ae0e0d972e1cd005c2dab932e3473d3a49"} Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.334453 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.356766 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56m9q\" (UniqueName: \"kubernetes.io/projected/9382f6b2-556f-413e-b55d-e57a72090743-kube-api-access-56m9q\") pod \"9382f6b2-556f-413e-b55d-e57a72090743\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.356844 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-utilities\") pod \"9382f6b2-556f-413e-b55d-e57a72090743\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.356915 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-catalog-content\") pod \"9382f6b2-556f-413e-b55d-e57a72090743\" (UID: \"9382f6b2-556f-413e-b55d-e57a72090743\") " Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.357763 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-utilities" (OuterVolumeSpecName: "utilities") pod "9382f6b2-556f-413e-b55d-e57a72090743" (UID: "9382f6b2-556f-413e-b55d-e57a72090743"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.358833 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.415404 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9382f6b2-556f-413e-b55d-e57a72090743-kube-api-access-56m9q" (OuterVolumeSpecName: "kube-api-access-56m9q") pod "9382f6b2-556f-413e-b55d-e57a72090743" (UID: "9382f6b2-556f-413e-b55d-e57a72090743"). InnerVolumeSpecName "kube-api-access-56m9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.441137 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9382f6b2-556f-413e-b55d-e57a72090743" (UID: "9382f6b2-556f-413e-b55d-e57a72090743"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.459926 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9382f6b2-556f-413e-b55d-e57a72090743-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:32 crc kubenswrapper[4988]: I1123 07:45:32.459962 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56m9q\" (UniqueName: \"kubernetes.io/projected/9382f6b2-556f-413e-b55d-e57a72090743-kube-api-access-56m9q\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:33 crc kubenswrapper[4988]: I1123 07:45:33.243686 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckwsw" event={"ID":"9382f6b2-556f-413e-b55d-e57a72090743","Type":"ContainerDied","Data":"38346e5e7aa704b59b87700ce0e3430f30dbb05570ead900b999c710bf2d1686"} Nov 23 07:45:33 crc kubenswrapper[4988]: I1123 07:45:33.243793 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckwsw" Nov 23 07:45:33 crc kubenswrapper[4988]: I1123 07:45:33.244260 4988 scope.go:117] "RemoveContainer" containerID="c60db2f14081dbb796f6e6e3624d16ae0e0d972e1cd005c2dab932e3473d3a49" Nov 23 07:45:33 crc kubenswrapper[4988]: I1123 07:45:33.279110 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ckwsw"] Nov 23 07:45:33 crc kubenswrapper[4988]: I1123 07:45:33.281625 4988 scope.go:117] "RemoveContainer" containerID="0b031d659cc135ee5a4d57e884d0cf7e3466216aaf7842105fea1f3fbe63020c" Nov 23 07:45:33 crc kubenswrapper[4988]: I1123 07:45:33.287969 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ckwsw"] Nov 23 07:45:33 crc kubenswrapper[4988]: I1123 07:45:33.307519 4988 scope.go:117] "RemoveContainer" containerID="2e8f33ee119645e903059176724081c732b6a69046df0cf39ce07fcd42761627" Nov 23 07:45:34 crc kubenswrapper[4988]: I1123 07:45:34.518664 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9382f6b2-556f-413e-b55d-e57a72090743" path="/var/lib/kubelet/pods/9382f6b2-556f-413e-b55d-e57a72090743/volumes" Nov 23 07:45:56 crc kubenswrapper[4988]: I1123 07:45:56.402961 4988 scope.go:117] "RemoveContainer" containerID="0eb43bd1b1ab4f381b4ce368f02272f9c8ad2cca95069b9b781db0a4b79dc116" Nov 23 07:46:51 crc kubenswrapper[4988]: I1123 07:46:51.672548 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:46:51 crc kubenswrapper[4988]: I1123 07:46:51.673249 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:47:21 crc kubenswrapper[4988]: I1123 07:47:21.672583 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 
07:47:21 crc kubenswrapper[4988]: I1123 07:47:21.673319 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:47:51 crc kubenswrapper[4988]: I1123 07:47:51.672646 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:47:51 crc kubenswrapper[4988]: I1123 07:47:51.673361 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:47:51 crc kubenswrapper[4988]: I1123 07:47:51.673417 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:47:51 crc kubenswrapper[4988]: I1123 07:47:51.674089 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:47:51 crc kubenswrapper[4988]: I1123 07:47:51.674180 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" gracePeriod=600 Nov 23 07:47:51 crc kubenswrapper[4988]: E1123 07:47:51.820743 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:47:51 crc kubenswrapper[4988]: I1123 07:47:51.974753 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" exitCode=0 Nov 23 07:47:51 crc kubenswrapper[4988]: I1123 07:47:51.974820 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98"} Nov 23 07:47:51 crc kubenswrapper[4988]: I1123 07:47:51.974869 4988 scope.go:117] "RemoveContainer" containerID="353011342cd59ed326f2d4ddeec78a9773bdb0e44b28542d357a126781b91654" Nov 23 07:47:51 crc kubenswrapper[4988]: I1123 07:47:51.975617 4988 scope.go:117] "RemoveContainer" 
containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:47:51 crc kubenswrapper[4988]: E1123 07:47:51.975982 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:48:05 crc kubenswrapper[4988]: I1123 07:48:05.496409 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:48:05 crc kubenswrapper[4988]: E1123 07:48:05.497372 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:48:17 crc kubenswrapper[4988]: I1123 07:48:17.495855 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:48:17 crc kubenswrapper[4988]: E1123 07:48:17.496990 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:48:28 crc kubenswrapper[4988]: I1123 07:48:28.505396 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:48:28 crc kubenswrapper[4988]: E1123 07:48:28.506569 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:48:43 crc kubenswrapper[4988]: I1123 07:48:43.497030 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:48:43 crc kubenswrapper[4988]: E1123 07:48:43.498441 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:48:55 crc kubenswrapper[4988]: I1123 07:48:55.496413 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:48:55 crc kubenswrapper[4988]: E1123 07:48:55.497456 4988 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:49:06 crc kubenswrapper[4988]: I1123 07:49:06.497241 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:49:06 crc kubenswrapper[4988]: E1123 07:49:06.499309 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:49:19 crc kubenswrapper[4988]: I1123 07:49:19.495858 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:49:19 crc kubenswrapper[4988]: E1123 07:49:19.496844 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:49:31 crc kubenswrapper[4988]: I1123 07:49:31.496043 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:49:31 crc kubenswrapper[4988]: E1123 07:49:31.497012 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:49:43 crc kubenswrapper[4988]: I1123 07:49:43.495767 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:49:43 crc kubenswrapper[4988]: E1123 07:49:43.496900 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:49:55 crc kubenswrapper[4988]: I1123 07:49:55.499303 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:49:55 crc kubenswrapper[4988]: E1123 07:49:55.501131 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:50:07 crc kubenswrapper[4988]: I1123 07:50:07.496727 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:50:07 crc kubenswrapper[4988]: E1123 07:50:07.499640 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:50:21 crc kubenswrapper[4988]: I1123 07:50:21.496821 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:50:21 crc kubenswrapper[4988]: E1123 07:50:21.497966 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:50:36 crc kubenswrapper[4988]: I1123 07:50:36.496155 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:50:36 crc kubenswrapper[4988]: E1123 07:50:36.497292 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.197857 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dzxnx"] Nov 23 07:50:44 crc kubenswrapper[4988]: E1123 07:50:44.199351 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9382f6b2-556f-413e-b55d-e57a72090743" containerName="extract-content" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.199387 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9382f6b2-556f-413e-b55d-e57a72090743" containerName="extract-content" Nov 23 07:50:44 crc kubenswrapper[4988]: E1123 07:50:44.199434 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9382f6b2-556f-413e-b55d-e57a72090743" containerName="registry-server" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.199452 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9382f6b2-556f-413e-b55d-e57a72090743" containerName="registry-server" Nov 23 07:50:44 crc kubenswrapper[4988]: E1123 07:50:44.199475 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9382f6b2-556f-413e-b55d-e57a72090743" containerName="extract-utilities" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.199493 4988 state_mem.go:107] "Deleted CPUSet assignment" 
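
The "back-off 5m0s" in the repeated CrashLoopBackOff errors above means the machine-config-daemon container has climbed the kubelet's restart ladder to its cap: the delay starts at 10s and doubles on each crash up to a 300s maximum, and every "Error syncing pod, skipping" line is a sync pass declining to restart the container until the current window expires. A short sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Kubelet's default crash-loop back-off: 10s initial, doubling per
	// restart, capped at 5m (the "back-off 5m0s" in the log).
	delay := 10 * time.Second
	const max = 5 * time.Minute
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %s\n", restart, delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
	// Output: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, ...
}
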
podUID="9382f6b2-556f-413e-b55d-e57a72090743" containerName="extract-utilities" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.199817 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9382f6b2-556f-413e-b55d-e57a72090743" containerName="registry-server" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.202356 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.215732 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzxnx"] Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.247786 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-utilities\") pod \"redhat-operators-dzxnx\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.248302 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-catalog-content\") pod \"redhat-operators-dzxnx\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.248659 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqd8t\" (UniqueName: \"kubernetes.io/projected/eda20535-1abd-4978-8be3-845939d4288f-kube-api-access-wqd8t\") pod \"redhat-operators-dzxnx\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.350489 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-catalog-content\") pod \"redhat-operators-dzxnx\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.351078 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqd8t\" (UniqueName: \"kubernetes.io/projected/eda20535-1abd-4978-8be3-845939d4288f-kube-api-access-wqd8t\") pod \"redhat-operators-dzxnx\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.351401 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-utilities\") pod \"redhat-operators-dzxnx\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.352179 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-catalog-content\") pod \"redhat-operators-dzxnx\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.352294 4988 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-utilities\") pod \"redhat-operators-dzxnx\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.381398 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqd8t\" (UniqueName: \"kubernetes.io/projected/eda20535-1abd-4978-8be3-845939d4288f-kube-api-access-wqd8t\") pod \"redhat-operators-dzxnx\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:44 crc kubenswrapper[4988]: I1123 07:50:44.561769 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:45 crc kubenswrapper[4988]: I1123 07:50:45.021710 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzxnx"] Nov 23 07:50:45 crc kubenswrapper[4988]: I1123 07:50:45.683926 4988 generic.go:334] "Generic (PLEG): container finished" podID="eda20535-1abd-4978-8be3-845939d4288f" containerID="71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485" exitCode=0 Nov 23 07:50:45 crc kubenswrapper[4988]: I1123 07:50:45.683985 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzxnx" event={"ID":"eda20535-1abd-4978-8be3-845939d4288f","Type":"ContainerDied","Data":"71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485"} Nov 23 07:50:45 crc kubenswrapper[4988]: I1123 07:50:45.684331 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzxnx" event={"ID":"eda20535-1abd-4978-8be3-845939d4288f","Type":"ContainerStarted","Data":"eccd29533d31d281fd54917836b6a5f7be7a654a33b38987f049ebfd54716ece"} Nov 23 07:50:45 crc kubenswrapper[4988]: I1123 07:50:45.688471 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:50:46 crc kubenswrapper[4988]: I1123 07:50:46.695745 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzxnx" event={"ID":"eda20535-1abd-4978-8be3-845939d4288f","Type":"ContainerStarted","Data":"f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75"} Nov 23 07:50:47 crc kubenswrapper[4988]: I1123 07:50:47.707041 4988 generic.go:334] "Generic (PLEG): container finished" podID="eda20535-1abd-4978-8be3-845939d4288f" containerID="f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75" exitCode=0 Nov 23 07:50:47 crc kubenswrapper[4988]: I1123 07:50:47.707307 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzxnx" event={"ID":"eda20535-1abd-4978-8be3-845939d4288f","Type":"ContainerDied","Data":"f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75"} Nov 23 07:50:48 crc kubenswrapper[4988]: I1123 07:50:48.504762 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:50:48 crc kubenswrapper[4988]: E1123 07:50:48.506106 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:50:48 crc kubenswrapper[4988]: I1123 07:50:48.723676 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzxnx" event={"ID":"eda20535-1abd-4978-8be3-845939d4288f","Type":"ContainerStarted","Data":"960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9"} Nov 23 07:50:54 crc kubenswrapper[4988]: I1123 07:50:54.562737 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:54 crc kubenswrapper[4988]: I1123 07:50:54.563377 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:50:55 crc kubenswrapper[4988]: I1123 07:50:55.603358 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dzxnx" podUID="eda20535-1abd-4978-8be3-845939d4288f" containerName="registry-server" probeResult="failure" output=< Nov 23 07:50:55 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 07:50:55 crc kubenswrapper[4988]: > Nov 23 07:50:59 crc kubenswrapper[4988]: I1123 07:50:59.497018 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:50:59 crc kubenswrapper[4988]: E1123 07:50:59.497978 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:51:04 crc kubenswrapper[4988]: I1123 07:51:04.638602 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:51:04 crc kubenswrapper[4988]: I1123 07:51:04.666355 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dzxnx" podStartSLOduration=18.20657714 podStartE2EDuration="20.666336422s" podCreationTimestamp="2025-11-23 07:50:44 +0000 UTC" firstStartedPulling="2025-11-23 07:50:45.688065045 +0000 UTC m=+3897.996577848" lastFinishedPulling="2025-11-23 07:50:48.147824337 +0000 UTC m=+3900.456337130" observedRunningTime="2025-11-23 07:50:48.763919894 +0000 UTC m=+3901.072432727" watchObservedRunningTime="2025-11-23 07:51:04.666336422 +0000 UTC m=+3916.974849195" Nov 23 07:51:04 crc kubenswrapper[4988]: I1123 07:51:04.726945 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:51:04 crc kubenswrapper[4988]: I1123 07:51:04.883689 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzxnx"] Nov 23 07:51:05 crc kubenswrapper[4988]: I1123 07:51:05.876229 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dzxnx" podUID="eda20535-1abd-4978-8be3-845939d4288f" containerName="registry-server" containerID="cri-o://960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9" gracePeriod=2 Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.367267 4988 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.396322 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqd8t\" (UniqueName: \"kubernetes.io/projected/eda20535-1abd-4978-8be3-845939d4288f-kube-api-access-wqd8t\") pod \"eda20535-1abd-4978-8be3-845939d4288f\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.396489 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-catalog-content\") pod \"eda20535-1abd-4978-8be3-845939d4288f\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.396544 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-utilities\") pod \"eda20535-1abd-4978-8be3-845939d4288f\" (UID: \"eda20535-1abd-4978-8be3-845939d4288f\") " Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.397579 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-utilities" (OuterVolumeSpecName: "utilities") pod "eda20535-1abd-4978-8be3-845939d4288f" (UID: "eda20535-1abd-4978-8be3-845939d4288f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.402771 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda20535-1abd-4978-8be3-845939d4288f-kube-api-access-wqd8t" (OuterVolumeSpecName: "kube-api-access-wqd8t") pod "eda20535-1abd-4978-8be3-845939d4288f" (UID: "eda20535-1abd-4978-8be3-845939d4288f"). InnerVolumeSpecName "kube-api-access-wqd8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.497511 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqd8t\" (UniqueName: \"kubernetes.io/projected/eda20535-1abd-4978-8be3-845939d4288f-kube-api-access-wqd8t\") on node \"crc\" DevicePath \"\"" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.497545 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.521323 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eda20535-1abd-4978-8be3-845939d4288f" (UID: "eda20535-1abd-4978-8be3-845939d4288f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.598850 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda20535-1abd-4978-8be3-845939d4288f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.889665 4988 generic.go:334] "Generic (PLEG): container finished" podID="eda20535-1abd-4978-8be3-845939d4288f" containerID="960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9" exitCode=0 Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.889728 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzxnx" event={"ID":"eda20535-1abd-4978-8be3-845939d4288f","Type":"ContainerDied","Data":"960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9"} Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.889767 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzxnx" event={"ID":"eda20535-1abd-4978-8be3-845939d4288f","Type":"ContainerDied","Data":"eccd29533d31d281fd54917836b6a5f7be7a654a33b38987f049ebfd54716ece"} Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.889828 4988 scope.go:117] "RemoveContainer" containerID="960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.889982 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzxnx" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.925317 4988 scope.go:117] "RemoveContainer" containerID="f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.934887 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzxnx"] Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.950153 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dzxnx"] Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.966872 4988 scope.go:117] "RemoveContainer" containerID="71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.993002 4988 scope.go:117] "RemoveContainer" containerID="960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9" Nov 23 07:51:06 crc kubenswrapper[4988]: E1123 07:51:06.993549 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9\": container with ID starting with 960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9 not found: ID does not exist" containerID="960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.993583 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9"} err="failed to get container status \"960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9\": rpc error: code = NotFound desc = could not find container \"960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9\": container with ID starting with 960ddc3fc3061caae4c0ff8cc36172df92ccdc2ce7a3e2e0fa0605ff3b60faf9 not found: ID does not exist" Nov 23 07:51:06 crc 
kubenswrapper[4988]: I1123 07:51:06.993608 4988 scope.go:117] "RemoveContainer" containerID="f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75" Nov 23 07:51:06 crc kubenswrapper[4988]: E1123 07:51:06.994052 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75\": container with ID starting with f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75 not found: ID does not exist" containerID="f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.994081 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75"} err="failed to get container status \"f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75\": rpc error: code = NotFound desc = could not find container \"f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75\": container with ID starting with f0d47da8f337a18284c2e47fb6d21833e81e91778df8cd6390aca450bb626e75 not found: ID does not exist" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.994098 4988 scope.go:117] "RemoveContainer" containerID="71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485" Nov 23 07:51:06 crc kubenswrapper[4988]: E1123 07:51:06.994472 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485\": container with ID starting with 71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485 not found: ID does not exist" containerID="71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485" Nov 23 07:51:06 crc kubenswrapper[4988]: I1123 07:51:06.994495 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485"} err="failed to get container status \"71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485\": rpc error: code = NotFound desc = could not find container \"71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485\": container with ID starting with 71f289ae42ea8bb86e182105177c5e153320cf0c282e4e614fe9d73b2074f485 not found: ID does not exist" Nov 23 07:51:08 crc kubenswrapper[4988]: I1123 07:51:08.510725 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda20535-1abd-4978-8be3-845939d4288f" path="/var/lib/kubelet/pods/eda20535-1abd-4978-8be3-845939d4288f/volumes" Nov 23 07:51:13 crc kubenswrapper[4988]: I1123 07:51:13.495981 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:51:13 crc kubenswrapper[4988]: E1123 07:51:13.496907 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:51:26 crc kubenswrapper[4988]: I1123 07:51:26.498749 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" 
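
The startup probe failure logged at 07:50:55 ("timeout: failed to connect service \":50051\" within 1s") is the registry-server container not yet answering its gRPC health check: the catalog pod runs extract-utilities and extract-content first, and the server only starts listening on :50051 once the catalog content is in place. The wording of the failure matches the grpc_health_probe tool these catalog images commonly use as an exec probe; that attribution is an assumption, the log itself does not name the probe binary. A minimal sketch of an equivalent check in Go against the standard gRPC health service, with the address and 1s timeout taken from the log line:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // 1s budget, matching the "within 1s" in the probe output above.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // Port from the log; a server still extracting catalog content is
        // not listening yet, which yields exactly this connect failure.
        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
            grpc.WithBlock())
        if err != nil {
            fmt.Println("probe failure:", err)
            return
        }
        defer conn.Close()

        // An empty Service name asks about overall server health.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("probe failure:", err)
            return
        }
        fmt.Println("probe result:", resp.GetStatus()) // SERVING once ready
    }

By 07:51:04 the same probe reports status="started", so the few failing attempts are the catalog server warming up rather than a pod fault.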
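The pod_startup_latency_tracker entry at 07:51:04 is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window, and the monotonic offsets (m=+...) in the entry let you verify that without parsing wall-clock timestamps. A quick check of the arithmetic, with the three values copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Monotonic offsets and E2E duration from the tracker entry for
        // redhat-operators-dzxnx above.
        const (
            firstStartedPulling = 3897.996577848 // s, m=+ offset
            lastFinishedPulling = 3900.456337130 // s, m=+ offset
            e2e                 = 20.666336422   // s, podStartE2EDuration
        )
        pull := lastFinishedPulling - firstStartedPulling
        slo := e2e - pull
        fmt.Println("image pull:   ", time.Duration(pull*float64(time.Second)))
        fmt.Println("SLO duration: ", time.Duration(slo*float64(time.Second)))
        // image pull:    2.459759282s
        // SLO duration:  18.20657714s  (matches podStartSLOduration)
    }

The same identity holds for the community-operators-pdlmr tracker entry further down: 4.628153749 - 2.423529671 = 2.204624078, exactly the reported podStartSLOduration.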
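From 07:50:48 onward the log alternates "RemoveContainer" with "Error syncing pod, skipping ... CrashLoopBackOff: back-off 5m0s" for machine-config-daemon-jnwbw: each pod sync lands inside the container's restart back-off window and is rejected, so the pairs repeat every 10-15 seconds with no state change until the window expires. A sketch of the doubling back-off the kubelet applies to crash-looping containers, assuming the upstream defaults of a 10s initial delay capped at 5m (illustrative constants, not values read from this cluster's config):

    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialBackOff = 10 * time.Second // assumed kubelet default
        maxBackOff     = 5 * time.Minute  // matches "back-off 5m0s" in the log
    )

    // backOffAfter returns the restart delay applied after the n-th crash:
    // it doubles per crash and saturates at maxBackOff. Sync attempts that
    // arrive inside the delay produce the CrashLoopBackOff errors above.
    func backOffAfter(crashes int) time.Duration {
        d := initialBackOff
        for i := 1; i < crashes; i++ {
            d *= 2
            if d >= maxBackOff {
                return maxBackOff
            }
        }
        return d
    }

    func main() {
        for crashes := 1; crashes <= 7; crashes++ {
            fmt.Printf("crash %d -> wait %v\n", crashes, backOffAfter(crashes))
        }
        // crash 6 and later print 5m0s: once the cap is reached, every
        // sync inside the window is skipped, as in the entries above.
    }

The restart that finally succeeds at 07:52:58 (ContainerStarted 7b3b6208...) is consistent with that 5m cap expiring; the liveness-probe failures on http://127.0.0.1:8798/health that follow show the daemon failing its health endpoint again and being killed with gracePeriod=600, feeding the same cycle.
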
Nov 23 07:51:26 crc kubenswrapper[4988]: E1123 07:51:26.500422 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:51:37 crc kubenswrapper[4988]: I1123 07:51:37.496915 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:51:37 crc kubenswrapper[4988]: E1123 07:51:37.500522 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:51:49 crc kubenswrapper[4988]: I1123 07:51:49.496233 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:51:49 crc kubenswrapper[4988]: E1123 07:51:49.496843 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:52:04 crc kubenswrapper[4988]: I1123 07:52:04.496040 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:52:04 crc kubenswrapper[4988]: E1123 07:52:04.496936 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:52:17 crc kubenswrapper[4988]: I1123 07:52:17.496619 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:52:17 crc kubenswrapper[4988]: E1123 07:52:17.497909 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:52:31 crc kubenswrapper[4988]: I1123 07:52:31.496495 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:52:31 crc kubenswrapper[4988]: E1123 07:52:31.497689 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:52:44 crc kubenswrapper[4988]: I1123 07:52:44.496777 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:52:44 crc kubenswrapper[4988]: E1123 07:52:44.498110 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:52:58 crc kubenswrapper[4988]: I1123 07:52:58.507146 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:52:58 crc kubenswrapper[4988]: I1123 07:52:58.947719 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"7b3b6208b1c3b654dc592758da8686e5a63a8cb8b0d2187dd565ad644f13f6ea"} Nov 23 07:55:21 crc kubenswrapper[4988]: I1123 07:55:21.672028 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:55:21 crc kubenswrapper[4988]: I1123 07:55:21.672526 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.351699 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pdlmr"] Nov 23 07:55:51 crc kubenswrapper[4988]: E1123 07:55:51.353107 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda20535-1abd-4978-8be3-845939d4288f" containerName="extract-utilities" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.353132 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda20535-1abd-4978-8be3-845939d4288f" containerName="extract-utilities" Nov 23 07:55:51 crc kubenswrapper[4988]: E1123 07:55:51.353181 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda20535-1abd-4978-8be3-845939d4288f" containerName="registry-server" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.353221 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda20535-1abd-4978-8be3-845939d4288f" containerName="registry-server" Nov 23 07:55:51 crc kubenswrapper[4988]: E1123 07:55:51.353249 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda20535-1abd-4978-8be3-845939d4288f" containerName="extract-content" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.353262 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eda20535-1abd-4978-8be3-845939d4288f" containerName="extract-content" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.353580 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda20535-1abd-4978-8be3-845939d4288f" containerName="registry-server" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.355372 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.371942 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdlmr"] Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.501024 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-catalog-content\") pod \"community-operators-pdlmr\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.501083 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-utilities\") pod \"community-operators-pdlmr\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.501145 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qqj\" (UniqueName: \"kubernetes.io/projected/729281ba-d867-4a93-8e39-eaed9ba482e6-kube-api-access-r6qqj\") pod \"community-operators-pdlmr\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.603675 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-catalog-content\") pod \"community-operators-pdlmr\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.604238 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-catalog-content\") pod \"community-operators-pdlmr\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.604332 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-utilities\") pod \"community-operators-pdlmr\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.604463 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6qqj\" (UniqueName: \"kubernetes.io/projected/729281ba-d867-4a93-8e39-eaed9ba482e6-kube-api-access-r6qqj\") pod \"community-operators-pdlmr\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.604666 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-utilities\") pod \"community-operators-pdlmr\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.635121 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6qqj\" (UniqueName: \"kubernetes.io/projected/729281ba-d867-4a93-8e39-eaed9ba482e6-kube-api-access-r6qqj\") pod \"community-operators-pdlmr\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.672260 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.672328 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:55:51 crc kubenswrapper[4988]: I1123 07:55:51.697026 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:55:52 crc kubenswrapper[4988]: I1123 07:55:52.208360 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdlmr"] Nov 23 07:55:52 crc kubenswrapper[4988]: I1123 07:55:52.560558 4988 generic.go:334] "Generic (PLEG): container finished" podID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerID="404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc" exitCode=0 Nov 23 07:55:52 crc kubenswrapper[4988]: I1123 07:55:52.560617 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdlmr" event={"ID":"729281ba-d867-4a93-8e39-eaed9ba482e6","Type":"ContainerDied","Data":"404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc"} Nov 23 07:55:52 crc kubenswrapper[4988]: I1123 07:55:52.560875 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdlmr" event={"ID":"729281ba-d867-4a93-8e39-eaed9ba482e6","Type":"ContainerStarted","Data":"34ef7e64e4c22175940d5147e8ef4d59c7614b74242cf04c42058c8b303eea60"} Nov 23 07:55:52 crc kubenswrapper[4988]: I1123 07:55:52.562301 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:55:53 crc kubenswrapper[4988]: I1123 07:55:53.574249 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdlmr" event={"ID":"729281ba-d867-4a93-8e39-eaed9ba482e6","Type":"ContainerStarted","Data":"0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364"} Nov 23 07:55:54 crc kubenswrapper[4988]: I1123 07:55:54.585904 4988 generic.go:334] "Generic (PLEG): container finished" podID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerID="0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364" exitCode=0 Nov 23 07:55:54 crc kubenswrapper[4988]: I1123 07:55:54.585964 4988 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdlmr" event={"ID":"729281ba-d867-4a93-8e39-eaed9ba482e6","Type":"ContainerDied","Data":"0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364"} Nov 23 07:55:55 crc kubenswrapper[4988]: I1123 07:55:55.598948 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdlmr" event={"ID":"729281ba-d867-4a93-8e39-eaed9ba482e6","Type":"ContainerStarted","Data":"bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d"} Nov 23 07:55:55 crc kubenswrapper[4988]: I1123 07:55:55.628177 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pdlmr" podStartSLOduration=2.204624078 podStartE2EDuration="4.628153749s" podCreationTimestamp="2025-11-23 07:55:51 +0000 UTC" firstStartedPulling="2025-11-23 07:55:52.561992795 +0000 UTC m=+4204.870505558" lastFinishedPulling="2025-11-23 07:55:54.985522426 +0000 UTC m=+4207.294035229" observedRunningTime="2025-11-23 07:55:55.624688024 +0000 UTC m=+4207.933200797" watchObservedRunningTime="2025-11-23 07:55:55.628153749 +0000 UTC m=+4207.936666552" Nov 23 07:56:01 crc kubenswrapper[4988]: I1123 07:56:01.698289 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:56:01 crc kubenswrapper[4988]: I1123 07:56:01.698809 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:56:01 crc kubenswrapper[4988]: I1123 07:56:01.736796 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:56:02 crc kubenswrapper[4988]: I1123 07:56:02.711440 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:56:02 crc kubenswrapper[4988]: I1123 07:56:02.775595 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdlmr"] Nov 23 07:56:04 crc kubenswrapper[4988]: I1123 07:56:04.675108 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pdlmr" podUID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerName="registry-server" containerID="cri-o://bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d" gracePeriod=2 Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.147252 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.240310 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-catalog-content\") pod \"729281ba-d867-4a93-8e39-eaed9ba482e6\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.240463 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-utilities\") pod \"729281ba-d867-4a93-8e39-eaed9ba482e6\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.240513 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6qqj\" (UniqueName: \"kubernetes.io/projected/729281ba-d867-4a93-8e39-eaed9ba482e6-kube-api-access-r6qqj\") pod \"729281ba-d867-4a93-8e39-eaed9ba482e6\" (UID: \"729281ba-d867-4a93-8e39-eaed9ba482e6\") " Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.241308 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-utilities" (OuterVolumeSpecName: "utilities") pod "729281ba-d867-4a93-8e39-eaed9ba482e6" (UID: "729281ba-d867-4a93-8e39-eaed9ba482e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.246253 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729281ba-d867-4a93-8e39-eaed9ba482e6-kube-api-access-r6qqj" (OuterVolumeSpecName: "kube-api-access-r6qqj") pod "729281ba-d867-4a93-8e39-eaed9ba482e6" (UID: "729281ba-d867-4a93-8e39-eaed9ba482e6"). InnerVolumeSpecName "kube-api-access-r6qqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.319915 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "729281ba-d867-4a93-8e39-eaed9ba482e6" (UID: "729281ba-d867-4a93-8e39-eaed9ba482e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.342579 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.342622 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729281ba-d867-4a93-8e39-eaed9ba482e6-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.342643 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6qqj\" (UniqueName: \"kubernetes.io/projected/729281ba-d867-4a93-8e39-eaed9ba482e6-kube-api-access-r6qqj\") on node \"crc\" DevicePath \"\"" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.693574 4988 generic.go:334] "Generic (PLEG): container finished" podID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerID="bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d" exitCode=0 Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.693651 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdlmr" event={"ID":"729281ba-d867-4a93-8e39-eaed9ba482e6","Type":"ContainerDied","Data":"bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d"} Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.693734 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdlmr" event={"ID":"729281ba-d867-4a93-8e39-eaed9ba482e6","Type":"ContainerDied","Data":"34ef7e64e4c22175940d5147e8ef4d59c7614b74242cf04c42058c8b303eea60"} Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.693766 4988 scope.go:117] "RemoveContainer" containerID="bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.693671 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdlmr" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.725516 4988 scope.go:117] "RemoveContainer" containerID="0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.742114 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdlmr"] Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.747048 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pdlmr"] Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.762996 4988 scope.go:117] "RemoveContainer" containerID="404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.799008 4988 scope.go:117] "RemoveContainer" containerID="bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d" Nov 23 07:56:05 crc kubenswrapper[4988]: E1123 07:56:05.799622 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d\": container with ID starting with bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d not found: ID does not exist" containerID="bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.799731 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d"} err="failed to get container status \"bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d\": rpc error: code = NotFound desc = could not find container \"bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d\": container with ID starting with bd593760695f20717d0d54deefa7716012807d9ac3235f8a3e4a6ba7e1cefd2d not found: ID does not exist" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.799815 4988 scope.go:117] "RemoveContainer" containerID="0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364" Nov 23 07:56:05 crc kubenswrapper[4988]: E1123 07:56:05.800293 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364\": container with ID starting with 0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364 not found: ID does not exist" containerID="0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.800334 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364"} err="failed to get container status \"0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364\": rpc error: code = NotFound desc = could not find container \"0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364\": container with ID starting with 0bdc5cb63736320f2cf5d326f20990e30e939c09f8bc6d706faf2f2bca102364 not found: ID does not exist" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.800364 4988 scope.go:117] "RemoveContainer" containerID="404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc" Nov 23 07:56:05 crc kubenswrapper[4988]: E1123 07:56:05.800760 4988 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc\": container with ID starting with 404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc not found: ID does not exist" containerID="404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc" Nov 23 07:56:05 crc kubenswrapper[4988]: I1123 07:56:05.800815 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc"} err="failed to get container status \"404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc\": rpc error: code = NotFound desc = could not find container \"404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc\": container with ID starting with 404e0f1e4f99cbd41e3760e548c325b0edb06da2c71c47985f7c13c8391255dc not found: ID does not exist" Nov 23 07:56:06 crc kubenswrapper[4988]: I1123 07:56:06.506989 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729281ba-d867-4a93-8e39-eaed9ba482e6" path="/var/lib/kubelet/pods/729281ba-d867-4a93-8e39-eaed9ba482e6/volumes" Nov 23 07:56:21 crc kubenswrapper[4988]: I1123 07:56:21.672471 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:56:21 crc kubenswrapper[4988]: I1123 07:56:21.673348 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:56:21 crc kubenswrapper[4988]: I1123 07:56:21.673430 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:56:21 crc kubenswrapper[4988]: I1123 07:56:21.674667 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b3b6208b1c3b654dc592758da8686e5a63a8cb8b0d2187dd565ad644f13f6ea"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:56:21 crc kubenswrapper[4988]: I1123 07:56:21.674784 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://7b3b6208b1c3b654dc592758da8686e5a63a8cb8b0d2187dd565ad644f13f6ea" gracePeriod=600 Nov 23 07:56:21 crc kubenswrapper[4988]: I1123 07:56:21.846037 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="7b3b6208b1c3b654dc592758da8686e5a63a8cb8b0d2187dd565ad644f13f6ea" exitCode=0 Nov 23 07:56:21 crc kubenswrapper[4988]: I1123 07:56:21.846155 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" 
event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"7b3b6208b1c3b654dc592758da8686e5a63a8cb8b0d2187dd565ad644f13f6ea"} Nov 23 07:56:21 crc kubenswrapper[4988]: I1123 07:56:21.846566 4988 scope.go:117] "RemoveContainer" containerID="3999c75749767e2d97b34fc555bc132620cc678367c3a38db2622991adc6bb98" Nov 23 07:56:22 crc kubenswrapper[4988]: I1123 07:56:22.857985 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76"} Nov 23 07:58:51 crc kubenswrapper[4988]: I1123 07:58:51.672070 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:58:51 crc kubenswrapper[4988]: I1123 07:58:51.672734 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:59:21 crc kubenswrapper[4988]: I1123 07:59:21.672814 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:59:21 crc kubenswrapper[4988]: I1123 07:59:21.674334 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.655532 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mb5q4"] Nov 23 07:59:49 crc kubenswrapper[4988]: E1123 07:59:49.658644 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerName="registry-server" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.658770 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerName="registry-server" Nov 23 07:59:49 crc kubenswrapper[4988]: E1123 07:59:49.658860 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerName="extract-utilities" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.658938 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerName="extract-utilities" Nov 23 07:59:49 crc kubenswrapper[4988]: E1123 07:59:49.659070 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerName="extract-content" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.659154 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerName="extract-content" Nov 23 07:59:49 crc 
kubenswrapper[4988]: I1123 07:59:49.659450 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="729281ba-d867-4a93-8e39-eaed9ba482e6" containerName="registry-server" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.660855 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.676598 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mb5q4"] Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.857341 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vf22\" (UniqueName: \"kubernetes.io/projected/396dfb46-8640-4419-b5d0-c75c00fb6465-kube-api-access-6vf22\") pod \"certified-operators-mb5q4\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.857724 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-catalog-content\") pod \"certified-operators-mb5q4\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.857811 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-utilities\") pod \"certified-operators-mb5q4\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.958845 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-utilities\") pod \"certified-operators-mb5q4\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.959341 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vf22\" (UniqueName: \"kubernetes.io/projected/396dfb46-8640-4419-b5d0-c75c00fb6465-kube-api-access-6vf22\") pod \"certified-operators-mb5q4\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.959542 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-catalog-content\") pod \"certified-operators-mb5q4\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.960534 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-catalog-content\") pod \"certified-operators-mb5q4\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.961117 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-utilities\") pod \"certified-operators-mb5q4\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:49 crc kubenswrapper[4988]: I1123 07:59:49.987828 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vf22\" (UniqueName: \"kubernetes.io/projected/396dfb46-8640-4419-b5d0-c75c00fb6465-kube-api-access-6vf22\") pod \"certified-operators-mb5q4\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:50 crc kubenswrapper[4988]: I1123 07:59:50.026261 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 07:59:50 crc kubenswrapper[4988]: I1123 07:59:50.510942 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mb5q4"] Nov 23 07:59:50 crc kubenswrapper[4988]: I1123 07:59:50.804650 4988 generic.go:334] "Generic (PLEG): container finished" podID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerID="19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781" exitCode=0 Nov 23 07:59:50 crc kubenswrapper[4988]: I1123 07:59:50.804699 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb5q4" event={"ID":"396dfb46-8640-4419-b5d0-c75c00fb6465","Type":"ContainerDied","Data":"19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781"} Nov 23 07:59:50 crc kubenswrapper[4988]: I1123 07:59:50.804728 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb5q4" event={"ID":"396dfb46-8640-4419-b5d0-c75c00fb6465","Type":"ContainerStarted","Data":"3c7a82173058c857d7a3c68c0b2f765ee0eb04d731e54b2292f8e6cc34e526cf"} Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.672267 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.672581 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.672632 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.673465 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.673524 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" 
containerName="machine-config-daemon" containerID="cri-o://297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" gracePeriod=600 Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.813000 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" exitCode=0 Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.813099 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76"} Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.813332 4988 scope.go:117] "RemoveContainer" containerID="7b3b6208b1c3b654dc592758da8686e5a63a8cb8b0d2187dd565ad644f13f6ea" Nov 23 07:59:51 crc kubenswrapper[4988]: E1123 07:59:51.814745 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.816182 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb5q4" event={"ID":"396dfb46-8640-4419-b5d0-c75c00fb6465","Type":"ContainerStarted","Data":"57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1"} Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.853063 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zvkgx"] Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.856276 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.859249 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvkgx"] Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.891349 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-utilities\") pod \"redhat-marketplace-zvkgx\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.891412 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htc87\" (UniqueName: \"kubernetes.io/projected/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-kube-api-access-htc87\") pod \"redhat-marketplace-zvkgx\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.891433 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-catalog-content\") pod \"redhat-marketplace-zvkgx\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.992270 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htc87\" (UniqueName: \"kubernetes.io/projected/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-kube-api-access-htc87\") pod \"redhat-marketplace-zvkgx\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.992309 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-catalog-content\") pod \"redhat-marketplace-zvkgx\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.992368 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-utilities\") pod \"redhat-marketplace-zvkgx\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.992777 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-utilities\") pod \"redhat-marketplace-zvkgx\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:51 crc kubenswrapper[4988]: I1123 07:59:51.992874 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-catalog-content\") pod \"redhat-marketplace-zvkgx\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:52 crc kubenswrapper[4988]: I1123 07:59:52.015552 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-htc87\" (UniqueName: \"kubernetes.io/projected/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-kube-api-access-htc87\") pod \"redhat-marketplace-zvkgx\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:52 crc kubenswrapper[4988]: I1123 07:59:52.171526 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 07:59:52 crc kubenswrapper[4988]: I1123 07:59:52.576019 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvkgx"] Nov 23 07:59:52 crc kubenswrapper[4988]: I1123 07:59:52.823246 4988 generic.go:334] "Generic (PLEG): container finished" podID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerID="d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851" exitCode=0 Nov 23 07:59:52 crc kubenswrapper[4988]: I1123 07:59:52.823351 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvkgx" event={"ID":"4a980dd3-5164-4f3f-b74d-1b16f0fc3843","Type":"ContainerDied","Data":"d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851"} Nov 23 07:59:52 crc kubenswrapper[4988]: I1123 07:59:52.823626 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvkgx" event={"ID":"4a980dd3-5164-4f3f-b74d-1b16f0fc3843","Type":"ContainerStarted","Data":"42208b9b0048d681c7d5fcdf74da0aaaa45600933925c694b147dd1cb20ca65e"} Nov 23 07:59:52 crc kubenswrapper[4988]: I1123 07:59:52.826383 4988 generic.go:334] "Generic (PLEG): container finished" podID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerID="57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1" exitCode=0 Nov 23 07:59:52 crc kubenswrapper[4988]: I1123 07:59:52.826451 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb5q4" event={"ID":"396dfb46-8640-4419-b5d0-c75c00fb6465","Type":"ContainerDied","Data":"57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1"} Nov 23 07:59:52 crc kubenswrapper[4988]: I1123 07:59:52.828526 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 07:59:52 crc kubenswrapper[4988]: E1123 07:59:52.828743 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 07:59:53 crc kubenswrapper[4988]: I1123 07:59:53.839297 4988 generic.go:334] "Generic (PLEG): container finished" podID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerID="fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735" exitCode=0 Nov 23 07:59:53 crc kubenswrapper[4988]: I1123 07:59:53.839353 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvkgx" event={"ID":"4a980dd3-5164-4f3f-b74d-1b16f0fc3843","Type":"ContainerDied","Data":"fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735"} Nov 23 07:59:53 crc kubenswrapper[4988]: I1123 07:59:53.843692 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb5q4" 
event={"ID":"396dfb46-8640-4419-b5d0-c75c00fb6465","Type":"ContainerStarted","Data":"1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323"} Nov 23 07:59:53 crc kubenswrapper[4988]: I1123 07:59:53.896149 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mb5q4" podStartSLOduration=2.4754827329999998 podStartE2EDuration="4.896128444s" podCreationTimestamp="2025-11-23 07:59:49 +0000 UTC" firstStartedPulling="2025-11-23 07:59:50.8082714 +0000 UTC m=+4443.116784173" lastFinishedPulling="2025-11-23 07:59:53.228917081 +0000 UTC m=+4445.537429884" observedRunningTime="2025-11-23 07:59:53.887930744 +0000 UTC m=+4446.196443517" watchObservedRunningTime="2025-11-23 07:59:53.896128444 +0000 UTC m=+4446.204641217" Nov 23 07:59:54 crc kubenswrapper[4988]: I1123 07:59:54.854561 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvkgx" event={"ID":"4a980dd3-5164-4f3f-b74d-1b16f0fc3843","Type":"ContainerStarted","Data":"30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e"} Nov 23 07:59:54 crc kubenswrapper[4988]: I1123 07:59:54.877347 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zvkgx" podStartSLOduration=2.286826447 podStartE2EDuration="3.877332051s" podCreationTimestamp="2025-11-23 07:59:51 +0000 UTC" firstStartedPulling="2025-11-23 07:59:52.824855248 +0000 UTC m=+4445.133368011" lastFinishedPulling="2025-11-23 07:59:54.415360852 +0000 UTC m=+4446.723873615" observedRunningTime="2025-11-23 07:59:54.873761163 +0000 UTC m=+4447.182273966" watchObservedRunningTime="2025-11-23 07:59:54.877332051 +0000 UTC m=+4447.185844814" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.026423 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.027089 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.072980 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.155803 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr"] Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.156880 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.159427 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.160376 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.179894 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr"] Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.320828 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909f8f22-cb33-4129-a454-9b609e93c248-config-volume\") pod \"collect-profiles-29398080-9frzr\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.320991 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgwp\" (UniqueName: \"kubernetes.io/projected/909f8f22-cb33-4129-a454-9b609e93c248-kube-api-access-qlgwp\") pod \"collect-profiles-29398080-9frzr\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.321169 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909f8f22-cb33-4129-a454-9b609e93c248-secret-volume\") pod \"collect-profiles-29398080-9frzr\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.422632 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlgwp\" (UniqueName: \"kubernetes.io/projected/909f8f22-cb33-4129-a454-9b609e93c248-kube-api-access-qlgwp\") pod \"collect-profiles-29398080-9frzr\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.422725 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909f8f22-cb33-4129-a454-9b609e93c248-secret-volume\") pod \"collect-profiles-29398080-9frzr\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.422786 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909f8f22-cb33-4129-a454-9b609e93c248-config-volume\") pod \"collect-profiles-29398080-9frzr\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.424425 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909f8f22-cb33-4129-a454-9b609e93c248-config-volume\") pod 
\"collect-profiles-29398080-9frzr\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.437868 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909f8f22-cb33-4129-a454-9b609e93c248-secret-volume\") pod \"collect-profiles-29398080-9frzr\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.454640 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlgwp\" (UniqueName: \"kubernetes.io/projected/909f8f22-cb33-4129-a454-9b609e93c248-kube-api-access-qlgwp\") pod \"collect-profiles-29398080-9frzr\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.480872 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.906583 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr"] Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.947865 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 08:00:00 crc kubenswrapper[4988]: I1123 08:00:00.990731 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mb5q4"] Nov 23 08:00:01 crc kubenswrapper[4988]: I1123 08:00:01.919656 4988 generic.go:334] "Generic (PLEG): container finished" podID="909f8f22-cb33-4129-a454-9b609e93c248" containerID="73d45c0aeedc1159ee98e99105235e399fe8b7dba7b80ef73ee46ef91f998d65" exitCode=0 Nov 23 08:00:01 crc kubenswrapper[4988]: I1123 08:00:01.920339 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" event={"ID":"909f8f22-cb33-4129-a454-9b609e93c248","Type":"ContainerDied","Data":"73d45c0aeedc1159ee98e99105235e399fe8b7dba7b80ef73ee46ef91f998d65"} Nov 23 08:00:01 crc kubenswrapper[4988]: I1123 08:00:01.920413 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" event={"ID":"909f8f22-cb33-4129-a454-9b609e93c248","Type":"ContainerStarted","Data":"18835970aef45546c40288ae159fc48ff9c5e09cab1c99452ac11fca5d95c849"} Nov 23 08:00:02 crc kubenswrapper[4988]: I1123 08:00:02.172175 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 08:00:02 crc kubenswrapper[4988]: I1123 08:00:02.172253 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 08:00:02 crc kubenswrapper[4988]: I1123 08:00:02.213904 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 08:00:02 crc kubenswrapper[4988]: I1123 08:00:02.929978 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mb5q4" podUID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerName="registry-server" 
containerID="cri-o://1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323" gracePeriod=2 Nov 23 08:00:02 crc kubenswrapper[4988]: I1123 08:00:02.979686 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.220763 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.289993 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlgwp\" (UniqueName: \"kubernetes.io/projected/909f8f22-cb33-4129-a454-9b609e93c248-kube-api-access-qlgwp\") pod \"909f8f22-cb33-4129-a454-9b609e93c248\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.290039 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909f8f22-cb33-4129-a454-9b609e93c248-secret-volume\") pod \"909f8f22-cb33-4129-a454-9b609e93c248\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.290153 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909f8f22-cb33-4129-a454-9b609e93c248-config-volume\") pod \"909f8f22-cb33-4129-a454-9b609e93c248\" (UID: \"909f8f22-cb33-4129-a454-9b609e93c248\") " Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.291091 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/909f8f22-cb33-4129-a454-9b609e93c248-config-volume" (OuterVolumeSpecName: "config-volume") pod "909f8f22-cb33-4129-a454-9b609e93c248" (UID: "909f8f22-cb33-4129-a454-9b609e93c248"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.295820 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/909f8f22-cb33-4129-a454-9b609e93c248-kube-api-access-qlgwp" (OuterVolumeSpecName: "kube-api-access-qlgwp") pod "909f8f22-cb33-4129-a454-9b609e93c248" (UID: "909f8f22-cb33-4129-a454-9b609e93c248"). InnerVolumeSpecName "kube-api-access-qlgwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.296107 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/909f8f22-cb33-4129-a454-9b609e93c248-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "909f8f22-cb33-4129-a454-9b609e93c248" (UID: "909f8f22-cb33-4129-a454-9b609e93c248"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.357580 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.391003 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-utilities\") pod \"396dfb46-8640-4419-b5d0-c75c00fb6465\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.391065 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vf22\" (UniqueName: \"kubernetes.io/projected/396dfb46-8640-4419-b5d0-c75c00fb6465-kube-api-access-6vf22\") pod \"396dfb46-8640-4419-b5d0-c75c00fb6465\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.391205 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-catalog-content\") pod \"396dfb46-8640-4419-b5d0-c75c00fb6465\" (UID: \"396dfb46-8640-4419-b5d0-c75c00fb6465\") " Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.391462 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/909f8f22-cb33-4129-a454-9b609e93c248-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.391481 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlgwp\" (UniqueName: \"kubernetes.io/projected/909f8f22-cb33-4129-a454-9b609e93c248-kube-api-access-qlgwp\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.391492 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/909f8f22-cb33-4129-a454-9b609e93c248-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.395175 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-utilities" (OuterVolumeSpecName: "utilities") pod "396dfb46-8640-4419-b5d0-c75c00fb6465" (UID: "396dfb46-8640-4419-b5d0-c75c00fb6465"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.398844 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396dfb46-8640-4419-b5d0-c75c00fb6465-kube-api-access-6vf22" (OuterVolumeSpecName: "kube-api-access-6vf22") pod "396dfb46-8640-4419-b5d0-c75c00fb6465" (UID: "396dfb46-8640-4419-b5d0-c75c00fb6465"). InnerVolumeSpecName "kube-api-access-6vf22". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.439411 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "396dfb46-8640-4419-b5d0-c75c00fb6465" (UID: "396dfb46-8640-4419-b5d0-c75c00fb6465"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.493100 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.493153 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vf22\" (UniqueName: \"kubernetes.io/projected/396dfb46-8640-4419-b5d0-c75c00fb6465-kube-api-access-6vf22\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.493172 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396dfb46-8640-4419-b5d0-c75c00fb6465-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.904559 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvkgx"] Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.938379 4988 generic.go:334] "Generic (PLEG): container finished" podID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerID="1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323" exitCode=0 Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.938464 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb5q4" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.938499 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb5q4" event={"ID":"396dfb46-8640-4419-b5d0-c75c00fb6465","Type":"ContainerDied","Data":"1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323"} Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.938542 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb5q4" event={"ID":"396dfb46-8640-4419-b5d0-c75c00fb6465","Type":"ContainerDied","Data":"3c7a82173058c857d7a3c68c0b2f765ee0eb04d731e54b2292f8e6cc34e526cf"} Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.938566 4988 scope.go:117] "RemoveContainer" containerID="1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.942088 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.942491 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr" event={"ID":"909f8f22-cb33-4129-a454-9b609e93c248","Type":"ContainerDied","Data":"18835970aef45546c40288ae159fc48ff9c5e09cab1c99452ac11fca5d95c849"} Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.942564 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18835970aef45546c40288ae159fc48ff9c5e09cab1c99452ac11fca5d95c849" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.965275 4988 scope.go:117] "RemoveContainer" containerID="57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1" Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.981176 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mb5q4"] Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.988314 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mb5q4"] Nov 23 08:00:03 crc kubenswrapper[4988]: I1123 08:00:03.997974 4988 scope.go:117] "RemoveContainer" containerID="19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781" Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.015825 4988 scope.go:117] "RemoveContainer" containerID="1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323" Nov 23 08:00:04 crc kubenswrapper[4988]: E1123 08:00:04.016642 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323\": container with ID starting with 1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323 not found: ID does not exist" containerID="1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323" Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.016790 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323"} err="failed to get container status \"1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323\": rpc error: code = NotFound desc = could not find container \"1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323\": container with ID starting with 1999d2ce6b6b44249fc7d4b7c05f918f7168fe51810f48a994e5f9bdf83ff323 not found: ID does not exist" Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.016895 4988 scope.go:117] "RemoveContainer" containerID="57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1" Nov 23 08:00:04 crc kubenswrapper[4988]: E1123 08:00:04.017689 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1\": container with ID starting with 57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1 not found: ID does not exist" containerID="57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1" Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.017765 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1"} err="failed to get container status 
\"57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1\": rpc error: code = NotFound desc = could not find container \"57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1\": container with ID starting with 57a2061de2b0347902f8967c43980be7b50e3f65513f17baa94b31c9764f60b1 not found: ID does not exist" Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.017808 4988 scope.go:117] "RemoveContainer" containerID="19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781" Nov 23 08:00:04 crc kubenswrapper[4988]: E1123 08:00:04.018651 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781\": container with ID starting with 19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781 not found: ID does not exist" containerID="19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781" Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.018682 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781"} err="failed to get container status \"19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781\": rpc error: code = NotFound desc = could not find container \"19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781\": container with ID starting with 19a7dcdc37ef57dc8fc09936a21f27d96877331cfef7462f81a4809deb126781 not found: ID does not exist" Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.292077 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm"] Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.300910 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-lmbqm"] Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.508937 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="396dfb46-8640-4419-b5d0-c75c00fb6465" path="/var/lib/kubelet/pods/396dfb46-8640-4419-b5d0-c75c00fb6465/volumes" Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.510505 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b378727f-a7df-4930-b38a-e77353e67097" path="/var/lib/kubelet/pods/b378727f-a7df-4930-b38a-e77353e67097/volumes" Nov 23 08:00:04 crc kubenswrapper[4988]: I1123 08:00:04.951045 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zvkgx" podUID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerName="registry-server" containerID="cri-o://30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e" gracePeriod=2 Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.340685 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.419929 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-catalog-content\") pod \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.419967 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-utilities\") pod \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.420091 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htc87\" (UniqueName: \"kubernetes.io/projected/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-kube-api-access-htc87\") pod \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\" (UID: \"4a980dd3-5164-4f3f-b74d-1b16f0fc3843\") " Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.421607 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-utilities" (OuterVolumeSpecName: "utilities") pod "4a980dd3-5164-4f3f-b74d-1b16f0fc3843" (UID: "4a980dd3-5164-4f3f-b74d-1b16f0fc3843"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.424287 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-kube-api-access-htc87" (OuterVolumeSpecName: "kube-api-access-htc87") pod "4a980dd3-5164-4f3f-b74d-1b16f0fc3843" (UID: "4a980dd3-5164-4f3f-b74d-1b16f0fc3843"). InnerVolumeSpecName "kube-api-access-htc87". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.436513 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a980dd3-5164-4f3f-b74d-1b16f0fc3843" (UID: "4a980dd3-5164-4f3f-b74d-1b16f0fc3843"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.522044 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htc87\" (UniqueName: \"kubernetes.io/projected/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-kube-api-access-htc87\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.522472 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.522628 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a980dd3-5164-4f3f-b74d-1b16f0fc3843-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.963385 4988 generic.go:334] "Generic (PLEG): container finished" podID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerID="30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e" exitCode=0 Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.963508 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvkgx" event={"ID":"4a980dd3-5164-4f3f-b74d-1b16f0fc3843","Type":"ContainerDied","Data":"30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e"} Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.963551 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvkgx" event={"ID":"4a980dd3-5164-4f3f-b74d-1b16f0fc3843","Type":"ContainerDied","Data":"42208b9b0048d681c7d5fcdf74da0aaaa45600933925c694b147dd1cb20ca65e"} Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.963580 4988 scope.go:117] "RemoveContainer" containerID="30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e" Nov 23 08:00:05 crc kubenswrapper[4988]: I1123 08:00:05.963661 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvkgx" Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.000542 4988 scope.go:117] "RemoveContainer" containerID="fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735" Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.068309 4988 scope.go:117] "RemoveContainer" containerID="d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851" Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.072326 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvkgx"] Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.077593 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvkgx"] Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.100808 4988 scope.go:117] "RemoveContainer" containerID="30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e" Nov 23 08:00:06 crc kubenswrapper[4988]: E1123 08:00:06.101565 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e\": container with ID starting with 30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e not found: ID does not exist" containerID="30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e" Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.101596 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e"} err="failed to get container status \"30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e\": rpc error: code = NotFound desc = could not find container \"30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e\": container with ID starting with 30993fc7679f2c7de6ce9007ec807c1c76beaf13bd454227e7486fb486e8050e not found: ID does not exist" Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.101617 4988 scope.go:117] "RemoveContainer" containerID="fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735" Nov 23 08:00:06 crc kubenswrapper[4988]: E1123 08:00:06.101963 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735\": container with ID starting with fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735 not found: ID does not exist" containerID="fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735" Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.102011 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735"} err="failed to get container status \"fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735\": rpc error: code = NotFound desc = could not find container \"fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735\": container with ID starting with fffd5e9f154a791a20a08cc47ff4d9be65a56b249bbf5db503ea187d54594735 not found: ID does not exist" Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.102041 4988 scope.go:117] "RemoveContainer" containerID="d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851" Nov 23 08:00:06 crc kubenswrapper[4988]: E1123 08:00:06.102405 4988 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851\": container with ID starting with d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851 not found: ID does not exist" containerID="d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851" Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.102426 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851"} err="failed to get container status \"d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851\": rpc error: code = NotFound desc = could not find container \"d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851\": container with ID starting with d08fcc5f1fc782a4ad88d70cc8ba75de25865ed5740fdb8191a6db9770921851 not found: ID does not exist" Nov 23 08:00:06 crc kubenswrapper[4988]: I1123 08:00:06.508301 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" path="/var/lib/kubelet/pods/4a980dd3-5164-4f3f-b74d-1b16f0fc3843/volumes" Nov 23 08:00:07 crc kubenswrapper[4988]: I1123 08:00:07.495906 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:00:07 crc kubenswrapper[4988]: E1123 08:00:07.496561 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:00:18 crc kubenswrapper[4988]: I1123 08:00:18.505513 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:00:18 crc kubenswrapper[4988]: E1123 08:00:18.507232 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:00:31 crc kubenswrapper[4988]: I1123 08:00:31.495861 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:00:31 crc kubenswrapper[4988]: E1123 08:00:31.496980 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:00:44 crc kubenswrapper[4988]: I1123 08:00:44.496987 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:00:44 crc kubenswrapper[4988]: E1123 08:00:44.498231 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:00:55 crc kubenswrapper[4988]: I1123 08:00:55.496139 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:00:55 crc kubenswrapper[4988]: E1123 08:00:55.497159 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:00:56 crc kubenswrapper[4988]: I1123 08:00:56.740435 4988 scope.go:117] "RemoveContainer" containerID="af1d2ab79f46c8e3b9964b808fdcaae9ad163d5b1806259e11e3024ad1fa999a" Nov 23 08:01:06 crc kubenswrapper[4988]: I1123 08:01:06.496094 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:01:06 crc kubenswrapper[4988]: E1123 08:01:06.496965 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:01:17 crc kubenswrapper[4988]: I1123 08:01:17.496437 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:01:17 crc kubenswrapper[4988]: E1123 08:01:17.497498 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:01:28 crc kubenswrapper[4988]: I1123 08:01:28.504572 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:01:28 crc kubenswrapper[4988]: E1123 08:01:28.505878 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:01:43 crc kubenswrapper[4988]: I1123 08:01:43.496745 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:01:43 crc kubenswrapper[4988]: E1123 08:01:43.498783 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:01:57 crc kubenswrapper[4988]: I1123 08:01:57.496984 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:01:57 crc kubenswrapper[4988]: E1123 08:01:57.498006 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.592405 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hrxgp"] Nov 23 08:02:04 crc kubenswrapper[4988]: E1123 08:02:04.593349 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerName="extract-utilities" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593373 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerName="extract-utilities" Nov 23 08:02:04 crc kubenswrapper[4988]: E1123 08:02:04.593397 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerName="registry-server" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593410 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerName="registry-server" Nov 23 08:02:04 crc kubenswrapper[4988]: E1123 08:02:04.593440 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerName="extract-content" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593453 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerName="extract-content" Nov 23 08:02:04 crc kubenswrapper[4988]: E1123 08:02:04.593475 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerName="extract-utilities" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593487 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerName="extract-utilities" Nov 23 08:02:04 crc kubenswrapper[4988]: E1123 08:02:04.593517 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="909f8f22-cb33-4129-a454-9b609e93c248" containerName="collect-profiles" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593529 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="909f8f22-cb33-4129-a454-9b609e93c248" containerName="collect-profiles" Nov 23 08:02:04 crc kubenswrapper[4988]: E1123 08:02:04.593556 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerName="registry-server" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593568 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" 
containerName="registry-server" Nov 23 08:02:04 crc kubenswrapper[4988]: E1123 08:02:04.593596 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerName="extract-content" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593609 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerName="extract-content" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593867 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a980dd3-5164-4f3f-b74d-1b16f0fc3843" containerName="registry-server" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593907 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="909f8f22-cb33-4129-a454-9b609e93c248" containerName="collect-profiles" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.593943 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="396dfb46-8640-4419-b5d0-c75c00fb6465" containerName="registry-server" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.595696 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.620940 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hrxgp"] Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.715746 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-catalog-content\") pod \"redhat-operators-hrxgp\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.716216 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-utilities\") pod \"redhat-operators-hrxgp\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.716277 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsxxj\" (UniqueName: \"kubernetes.io/projected/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-kube-api-access-xsxxj\") pod \"redhat-operators-hrxgp\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.817064 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-catalog-content\") pod \"redhat-operators-hrxgp\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.817136 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-utilities\") pod \"redhat-operators-hrxgp\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.817186 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xsxxj\" (UniqueName: \"kubernetes.io/projected/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-kube-api-access-xsxxj\") pod \"redhat-operators-hrxgp\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.817646 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-catalog-content\") pod \"redhat-operators-hrxgp\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.817826 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-utilities\") pod \"redhat-operators-hrxgp\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.842912 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsxxj\" (UniqueName: \"kubernetes.io/projected/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-kube-api-access-xsxxj\") pod \"redhat-operators-hrxgp\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:04 crc kubenswrapper[4988]: I1123 08:02:04.930272 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:05 crc kubenswrapper[4988]: I1123 08:02:05.147060 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hrxgp"] Nov 23 08:02:06 crc kubenswrapper[4988]: I1123 08:02:06.103989 4988 generic.go:334] "Generic (PLEG): container finished" podID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerID="9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8" exitCode=0 Nov 23 08:02:06 crc kubenswrapper[4988]: I1123 08:02:06.104084 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrxgp" event={"ID":"8e1ed1b1-f745-4031-acb0-9fb121f4ef54","Type":"ContainerDied","Data":"9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8"} Nov 23 08:02:06 crc kubenswrapper[4988]: I1123 08:02:06.104574 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrxgp" event={"ID":"8e1ed1b1-f745-4031-acb0-9fb121f4ef54","Type":"ContainerStarted","Data":"100a75e3eab06bda758c750c2a486e25261a4ff9edde4d9737ff658ce07c0340"} Nov 23 08:02:06 crc kubenswrapper[4988]: I1123 08:02:06.107753 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:02:07 crc kubenswrapper[4988]: I1123 08:02:07.117029 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrxgp" event={"ID":"8e1ed1b1-f745-4031-acb0-9fb121f4ef54","Type":"ContainerStarted","Data":"077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0"} Nov 23 08:02:08 crc kubenswrapper[4988]: I1123 08:02:08.129318 4988 generic.go:334] "Generic (PLEG): container finished" podID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerID="077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0" exitCode=0 Nov 23 08:02:08 crc kubenswrapper[4988]: I1123 08:02:08.129461 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hrxgp" event={"ID":"8e1ed1b1-f745-4031-acb0-9fb121f4ef54","Type":"ContainerDied","Data":"077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0"} Nov 23 08:02:09 crc kubenswrapper[4988]: I1123 08:02:09.140882 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrxgp" event={"ID":"8e1ed1b1-f745-4031-acb0-9fb121f4ef54","Type":"ContainerStarted","Data":"e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99"} Nov 23 08:02:09 crc kubenswrapper[4988]: I1123 08:02:09.166763 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hrxgp" podStartSLOduration=2.732715638 podStartE2EDuration="5.166734046s" podCreationTimestamp="2025-11-23 08:02:04 +0000 UTC" firstStartedPulling="2025-11-23 08:02:06.106926975 +0000 UTC m=+4578.415439768" lastFinishedPulling="2025-11-23 08:02:08.540945373 +0000 UTC m=+4580.849458176" observedRunningTime="2025-11-23 08:02:09.162581954 +0000 UTC m=+4581.471094797" watchObservedRunningTime="2025-11-23 08:02:09.166734046 +0000 UTC m=+4581.475246849" Nov 23 08:02:09 crc kubenswrapper[4988]: I1123 08:02:09.496306 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:02:09 crc kubenswrapper[4988]: E1123 08:02:09.496558 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:02:14 crc kubenswrapper[4988]: I1123 08:02:14.931312 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:14 crc kubenswrapper[4988]: I1123 08:02:14.931970 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:15 crc kubenswrapper[4988]: I1123 08:02:15.981048 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hrxgp" podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerName="registry-server" probeResult="failure" output=< Nov 23 08:02:15 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:02:15 crc kubenswrapper[4988]: > Nov 23 08:02:23 crc kubenswrapper[4988]: I1123 08:02:23.496382 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:02:23 crc kubenswrapper[4988]: E1123 08:02:23.497615 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:02:25 crc kubenswrapper[4988]: I1123 08:02:25.016647 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:25 crc kubenswrapper[4988]: I1123 08:02:25.097429 4988 
Nov 23 08:02:25 crc kubenswrapper[4988]: I1123 08:02:25.266890 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hrxgp"]
Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.300465 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hrxgp" podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerName="registry-server" containerID="cri-o://e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99" gracePeriod=2
Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.771771 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hrxgp"
Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.786707 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-utilities\") pod \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") "
Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.786752 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsxxj\" (UniqueName: \"kubernetes.io/projected/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-kube-api-access-xsxxj\") pod \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") "
Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.788328 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-utilities" (OuterVolumeSpecName: "utilities") pod "8e1ed1b1-f745-4031-acb0-9fb121f4ef54" (UID: "8e1ed1b1-f745-4031-acb0-9fb121f4ef54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.796675 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-kube-api-access-xsxxj" (OuterVolumeSpecName: "kube-api-access-xsxxj") pod "8e1ed1b1-f745-4031-acb0-9fb121f4ef54" (UID: "8e1ed1b1-f745-4031-acb0-9fb121f4ef54"). InnerVolumeSpecName "kube-api-access-xsxxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
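The startup-probe failure at 08:02:15 ("timeout: failed to connect service \":50051\" within 1s") is the characteristic output of a gRPC health check against the catalog pod's registry-server port before it starts listening; ten seconds later the probe reports "started" and the readiness probe flips to "ready". A rough reproduction of what the probe budget means, as a plain TCP connect with the same 1 s timeout; the host/port pairing and the gRPC health-check interpretation are assumptions inferred from the probe output, not stated in this log:

    import socket

    def can_connect(host: str, port: int, timeout_s: float = 1.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout_s."""
        try:
            with socket.create_connection((host, port), timeout=timeout_s):
                return True
        except OSError:
            return False

    # The probe output points at the registry-server endpoint on :50051.
    print(can_connect("127.0.0.1", 50051))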
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.887604 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-catalog-content\") pod \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\" (UID: \"8e1ed1b1-f745-4031-acb0-9fb121f4ef54\") " Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.888175 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.888294 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsxxj\" (UniqueName: \"kubernetes.io/projected/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-kube-api-access-xsxxj\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.989270 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e1ed1b1-f745-4031-acb0-9fb121f4ef54" (UID: "8e1ed1b1-f745-4031-acb0-9fb121f4ef54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:02:26 crc kubenswrapper[4988]: I1123 08:02:26.989490 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e1ed1b1-f745-4031-acb0-9fb121f4ef54-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.309640 4988 generic.go:334] "Generic (PLEG): container finished" podID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerID="e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99" exitCode=0 Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.309721 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrxgp" event={"ID":"8e1ed1b1-f745-4031-acb0-9fb121f4ef54","Type":"ContainerDied","Data":"e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99"} Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.309758 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hrxgp" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.309773 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hrxgp" event={"ID":"8e1ed1b1-f745-4031-acb0-9fb121f4ef54","Type":"ContainerDied","Data":"100a75e3eab06bda758c750c2a486e25261a4ff9edde4d9737ff658ce07c0340"} Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.309816 4988 scope.go:117] "RemoveContainer" containerID="e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.332660 4988 scope.go:117] "RemoveContainer" containerID="077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.362961 4988 scope.go:117] "RemoveContainer" containerID="9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.363576 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hrxgp"] Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.388848 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hrxgp"] Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.405090 4988 scope.go:117] "RemoveContainer" containerID="e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99" Nov 23 08:02:27 crc kubenswrapper[4988]: E1123 08:02:27.405587 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99\": container with ID starting with e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99 not found: ID does not exist" containerID="e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.405707 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99"} err="failed to get container status \"e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99\": rpc error: code = NotFound desc = could not find container \"e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99\": container with ID starting with e351f53825d5c3306e9360c336db80cbb8d45e5c4828ae008df05878dd07fb99 not found: ID does not exist" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.405813 4988 scope.go:117] "RemoveContainer" containerID="077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0" Nov 23 08:02:27 crc kubenswrapper[4988]: E1123 08:02:27.406333 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0\": container with ID starting with 077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0 not found: ID does not exist" containerID="077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.406378 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0"} err="failed to get container status \"077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0\": rpc error: code = NotFound desc = could not find container 
\"077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0\": container with ID starting with 077fa4771db45a1a6e46ced67b2e3b8a287a35fb8d597edb445390c0f25c82b0 not found: ID does not exist" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.406406 4988 scope.go:117] "RemoveContainer" containerID="9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8" Nov 23 08:02:27 crc kubenswrapper[4988]: E1123 08:02:27.406765 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8\": container with ID starting with 9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8 not found: ID does not exist" containerID="9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8" Nov 23 08:02:27 crc kubenswrapper[4988]: I1123 08:02:27.406822 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8"} err="failed to get container status \"9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8\": rpc error: code = NotFound desc = could not find container \"9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8\": container with ID starting with 9c0f799c008c5b54d4cf9bbbc527a8cfdf96fcc92455871e70c58c885b2113b8 not found: ID does not exist" Nov 23 08:02:28 crc kubenswrapper[4988]: I1123 08:02:28.512287 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" path="/var/lib/kubelet/pods/8e1ed1b1-f745-4031-acb0-9fb121f4ef54/volumes" Nov 23 08:02:37 crc kubenswrapper[4988]: I1123 08:02:37.496609 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:02:37 crc kubenswrapper[4988]: E1123 08:02:37.497741 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.491234 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-fl6wx"] Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.496921 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-fl6wx"] Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.633100 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-sc8wc"] Nov 23 08:02:39 crc kubenswrapper[4988]: E1123 08:02:39.633649 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerName="registry-server" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.633675 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerName="registry-server" Nov 23 08:02:39 crc kubenswrapper[4988]: E1123 08:02:39.633707 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerName="extract-content" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.633721 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerName="extract-content" Nov 23 08:02:39 crc kubenswrapper[4988]: E1123 08:02:39.633755 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerName="extract-utilities" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.633768 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerName="extract-utilities" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.635988 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1ed1b1-f745-4031-acb0-9fb121f4ef54" containerName="registry-server" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.637067 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.639289 4988 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-6s8d2" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.639962 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.640310 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.640519 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.642903 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-sc8wc"] Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.695685 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f57a628c-7f0d-481e-94c7-b613a54199c9-node-mnt\") pod \"crc-storage-crc-sc8wc\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.696303 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9nvm\" (UniqueName: \"kubernetes.io/projected/f57a628c-7f0d-481e-94c7-b613a54199c9-kube-api-access-r9nvm\") pod \"crc-storage-crc-sc8wc\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.696488 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f57a628c-7f0d-481e-94c7-b613a54199c9-crc-storage\") pod \"crc-storage-crc-sc8wc\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.798329 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f57a628c-7f0d-481e-94c7-b613a54199c9-node-mnt\") pod \"crc-storage-crc-sc8wc\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.798469 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9nvm\" (UniqueName: \"kubernetes.io/projected/f57a628c-7f0d-481e-94c7-b613a54199c9-kube-api-access-r9nvm\") pod \"crc-storage-crc-sc8wc\" (UID: 
\"f57a628c-7f0d-481e-94c7-b613a54199c9\") " pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.798593 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f57a628c-7f0d-481e-94c7-b613a54199c9-crc-storage\") pod \"crc-storage-crc-sc8wc\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.798822 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f57a628c-7f0d-481e-94c7-b613a54199c9-node-mnt\") pod \"crc-storage-crc-sc8wc\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.799804 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f57a628c-7f0d-481e-94c7-b613a54199c9-crc-storage\") pod \"crc-storage-crc-sc8wc\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.843451 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9nvm\" (UniqueName: \"kubernetes.io/projected/f57a628c-7f0d-481e-94c7-b613a54199c9-kube-api-access-r9nvm\") pod \"crc-storage-crc-sc8wc\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:39 crc kubenswrapper[4988]: I1123 08:02:39.973463 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:40 crc kubenswrapper[4988]: I1123 08:02:40.456751 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-sc8wc"] Nov 23 08:02:40 crc kubenswrapper[4988]: I1123 08:02:40.508303 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f82c43d-4d02-41dd-b05b-51735da4e160" path="/var/lib/kubelet/pods/7f82c43d-4d02-41dd-b05b-51735da4e160/volumes" Nov 23 08:02:41 crc kubenswrapper[4988]: I1123 08:02:41.462116 4988 generic.go:334] "Generic (PLEG): container finished" podID="f57a628c-7f0d-481e-94c7-b613a54199c9" containerID="045f0180421ef3eb3f64508a1234709b06108b933f8e03d475ab2dd23e809491" exitCode=0 Nov 23 08:02:41 crc kubenswrapper[4988]: I1123 08:02:41.462230 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sc8wc" event={"ID":"f57a628c-7f0d-481e-94c7-b613a54199c9","Type":"ContainerDied","Data":"045f0180421ef3eb3f64508a1234709b06108b933f8e03d475ab2dd23e809491"} Nov 23 08:02:41 crc kubenswrapper[4988]: I1123 08:02:41.462604 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sc8wc" event={"ID":"f57a628c-7f0d-481e-94c7-b613a54199c9","Type":"ContainerStarted","Data":"7d4d601beccadfcda85198b5964f0bebc558d8d8a8e87ac9056d66168c00bc04"} Nov 23 08:02:42 crc kubenswrapper[4988]: I1123 08:02:42.814042 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:42 crc kubenswrapper[4988]: I1123 08:02:42.945522 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9nvm\" (UniqueName: \"kubernetes.io/projected/f57a628c-7f0d-481e-94c7-b613a54199c9-kube-api-access-r9nvm\") pod \"f57a628c-7f0d-481e-94c7-b613a54199c9\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " Nov 23 08:02:42 crc kubenswrapper[4988]: I1123 08:02:42.945596 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f57a628c-7f0d-481e-94c7-b613a54199c9-crc-storage\") pod \"f57a628c-7f0d-481e-94c7-b613a54199c9\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " Nov 23 08:02:42 crc kubenswrapper[4988]: I1123 08:02:42.945743 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f57a628c-7f0d-481e-94c7-b613a54199c9-node-mnt\") pod \"f57a628c-7f0d-481e-94c7-b613a54199c9\" (UID: \"f57a628c-7f0d-481e-94c7-b613a54199c9\") " Nov 23 08:02:42 crc kubenswrapper[4988]: I1123 08:02:42.945934 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f57a628c-7f0d-481e-94c7-b613a54199c9-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "f57a628c-7f0d-481e-94c7-b613a54199c9" (UID: "f57a628c-7f0d-481e-94c7-b613a54199c9"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:02:42 crc kubenswrapper[4988]: I1123 08:02:42.946313 4988 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/f57a628c-7f0d-481e-94c7-b613a54199c9-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:42 crc kubenswrapper[4988]: I1123 08:02:42.953241 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57a628c-7f0d-481e-94c7-b613a54199c9-kube-api-access-r9nvm" (OuterVolumeSpecName: "kube-api-access-r9nvm") pod "f57a628c-7f0d-481e-94c7-b613a54199c9" (UID: "f57a628c-7f0d-481e-94c7-b613a54199c9"). InnerVolumeSpecName "kube-api-access-r9nvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:02:42 crc kubenswrapper[4988]: I1123 08:02:42.966716 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57a628c-7f0d-481e-94c7-b613a54199c9-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "f57a628c-7f0d-481e-94c7-b613a54199c9" (UID: "f57a628c-7f0d-481e-94c7-b613a54199c9"). InnerVolumeSpecName "crc-storage". 
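Each pod in this transcript walks the same volume-reconciler arc: VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded" on the way up, and "UnmountVolume started", "UnmountVolume.TearDown succeeded", "Volume detached" on the way down. A compact check that every volume set up in a transcript eventually reports detached; the message substrings match the entries above, but the parsing itself is an illustrative assumption:

    import re

    MOUNTED = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)\\?"')
    DETACHED = re.compile(r'Volume detached for volume \\?"([^"\\]+)\\?"')

    def leaked_volumes(journal_text: str) -> set[str]:
        """Volume names that were set up but never reported detached."""
        return set(MOUNTED.findall(journal_text)) - set(DETACHED.findall(journal_text))

For a transient pod like crc-storage-crc-sc8wc the result is empty: node-mnt, crc-storage, and kube-api-access-r9nvm all detach within seconds of the DELETE.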
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:02:43 crc kubenswrapper[4988]: I1123 08:02:43.047375 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9nvm\" (UniqueName: \"kubernetes.io/projected/f57a628c-7f0d-481e-94c7-b613a54199c9-kube-api-access-r9nvm\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:43 crc kubenswrapper[4988]: I1123 08:02:43.047423 4988 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/f57a628c-7f0d-481e-94c7-b613a54199c9-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:43 crc kubenswrapper[4988]: I1123 08:02:43.481951 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-sc8wc" event={"ID":"f57a628c-7f0d-481e-94c7-b613a54199c9","Type":"ContainerDied","Data":"7d4d601beccadfcda85198b5964f0bebc558d8d8a8e87ac9056d66168c00bc04"} Nov 23 08:02:43 crc kubenswrapper[4988]: I1123 08:02:43.482345 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d4d601beccadfcda85198b5964f0bebc558d8d8a8e87ac9056d66168c00bc04" Nov 23 08:02:43 crc kubenswrapper[4988]: I1123 08:02:43.482049 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-sc8wc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.037102 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-sc8wc"] Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.041185 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-sc8wc"] Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.166590 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-tmrtc"] Nov 23 08:02:45 crc kubenswrapper[4988]: E1123 08:02:45.166970 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57a628c-7f0d-481e-94c7-b613a54199c9" containerName="storage" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.166995 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57a628c-7f0d-481e-94c7-b613a54199c9" containerName="storage" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.167216 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57a628c-7f0d-481e-94c7-b613a54199c9" containerName="storage" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.167760 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.170082 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.171040 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.171086 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.171292 4988 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-6s8d2" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.183911 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qdrj\" (UniqueName: \"kubernetes.io/projected/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-kube-api-access-6qdrj\") pod \"crc-storage-crc-tmrtc\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.184059 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-node-mnt\") pod \"crc-storage-crc-tmrtc\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.184114 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-crc-storage\") pod \"crc-storage-crc-tmrtc\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.184333 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tmrtc"] Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.284758 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qdrj\" (UniqueName: \"kubernetes.io/projected/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-kube-api-access-6qdrj\") pod \"crc-storage-crc-tmrtc\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.285746 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-node-mnt\") pod \"crc-storage-crc-tmrtc\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.285317 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-node-mnt\") pod \"crc-storage-crc-tmrtc\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.286467 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-crc-storage\") pod \"crc-storage-crc-tmrtc\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " 
pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.287225 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-crc-storage\") pod \"crc-storage-crc-tmrtc\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.300798 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qdrj\" (UniqueName: \"kubernetes.io/projected/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-kube-api-access-6qdrj\") pod \"crc-storage-crc-tmrtc\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.505867 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:45 crc kubenswrapper[4988]: I1123 08:02:45.927411 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tmrtc"] Nov 23 08:02:45 crc kubenswrapper[4988]: W1123 08:02:45.936738 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd36a5f70_b341_4e4e_ad7f_fe9c5bd2aaea.slice/crio-212d3179b14fec5924a5cc24bfea6d13f2b898744d69dafdf6f4994f3876cde3 WatchSource:0}: Error finding container 212d3179b14fec5924a5cc24bfea6d13f2b898744d69dafdf6f4994f3876cde3: Status 404 returned error can't find the container with id 212d3179b14fec5924a5cc24bfea6d13f2b898744d69dafdf6f4994f3876cde3 Nov 23 08:02:46 crc kubenswrapper[4988]: I1123 08:02:46.518020 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f57a628c-7f0d-481e-94c7-b613a54199c9" path="/var/lib/kubelet/pods/f57a628c-7f0d-481e-94c7-b613a54199c9/volumes" Nov 23 08:02:46 crc kubenswrapper[4988]: I1123 08:02:46.520608 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tmrtc" event={"ID":"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea","Type":"ContainerStarted","Data":"212d3179b14fec5924a5cc24bfea6d13f2b898744d69dafdf6f4994f3876cde3"} Nov 23 08:02:47 crc kubenswrapper[4988]: I1123 08:02:47.533131 4988 generic.go:334] "Generic (PLEG): container finished" podID="d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea" containerID="d9d5067348e3fc432b913f52bc5fc5c7d3ac02bc66db73068fa6548cdd6e2f18" exitCode=0 Nov 23 08:02:47 crc kubenswrapper[4988]: I1123 08:02:47.533262 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tmrtc" event={"ID":"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea","Type":"ContainerDied","Data":"d9d5067348e3fc432b913f52bc5fc5c7d3ac02bc66db73068fa6548cdd6e2f18"} Nov 23 08:02:48 crc kubenswrapper[4988]: I1123 08:02:48.896446 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.039334 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qdrj\" (UniqueName: \"kubernetes.io/projected/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-kube-api-access-6qdrj\") pod \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.039444 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-node-mnt\") pod \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.039670 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-crc-storage\") pod \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\" (UID: \"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea\") " Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.039663 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea" (UID: "d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.040148 4988 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.045113 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-kube-api-access-6qdrj" (OuterVolumeSpecName: "kube-api-access-6qdrj") pod "d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea" (UID: "d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea"). InnerVolumeSpecName "kube-api-access-6qdrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.062812 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea" (UID: "d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.141407 4988 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.141455 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qdrj\" (UniqueName: \"kubernetes.io/projected/d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea-kube-api-access-6qdrj\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.561736 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tmrtc" event={"ID":"d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea","Type":"ContainerDied","Data":"212d3179b14fec5924a5cc24bfea6d13f2b898744d69dafdf6f4994f3876cde3"} Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.562523 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="212d3179b14fec5924a5cc24bfea6d13f2b898744d69dafdf6f4994f3876cde3" Nov 23 08:02:49 crc kubenswrapper[4988]: I1123 08:02:49.561844 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tmrtc" Nov 23 08:02:50 crc kubenswrapper[4988]: I1123 08:02:50.496278 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:02:50 crc kubenswrapper[4988]: E1123 08:02:50.496488 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:02:56 crc kubenswrapper[4988]: I1123 08:02:56.842670 4988 scope.go:117] "RemoveContainer" containerID="62484c6fbe8127266f8c378eb9ca0d9110f139143cb24b1ec18fce16679ad959" Nov 23 08:03:01 crc kubenswrapper[4988]: I1123 08:03:01.496988 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:03:01 crc kubenswrapper[4988]: E1123 08:03:01.497759 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:03:14 crc kubenswrapper[4988]: I1123 08:03:14.497272 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:03:14 crc kubenswrapper[4988]: E1123 08:03:14.498288 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:03:29 crc kubenswrapper[4988]: I1123 
08:03:29.496428 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:03:29 crc kubenswrapper[4988]: E1123 08:03:29.497569 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:03:41 crc kubenswrapper[4988]: I1123 08:03:41.496711 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:03:41 crc kubenswrapper[4988]: E1123 08:03:41.497427 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:03:54 crc kubenswrapper[4988]: I1123 08:03:54.496847 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:03:54 crc kubenswrapper[4988]: E1123 08:03:54.497456 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:04:09 crc kubenswrapper[4988]: I1123 08:04:09.497339 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:04:09 crc kubenswrapper[4988]: E1123 08:04:09.498453 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:04:21 crc kubenswrapper[4988]: I1123 08:04:21.496044 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:04:21 crc kubenswrapper[4988]: E1123 08:04:21.497169 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:04:35 crc kubenswrapper[4988]: I1123 08:04:35.496478 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:04:35 crc kubenswrapper[4988]: E1123 08:04:35.498535 
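The machine-config-daemon entries repeat because the pod sits in CrashLoopBackOff at the maximum restart delay: the kubelet restarts a crashing container with an exponential backoff that by default starts at 10s, doubles per failure, and caps at the "back-off 5m0s" quoted in every error above. The RemoveContainer/"Error syncing pod" pairs every 12-15 s are just the sync loop re-evaluating the pod while that timer runs. A one-liner for the default schedule; the 10s/300s defaults are assumed from upstream kubelet behavior, not stated in this log:

    # Kubelet's default crash-loop restart delays: 10s, 20s, 40s, ... capped at 300s (5m0s).
    delays = [min(10 * 2**n, 300) for n in range(8)]
    print(delays)  # [10, 20, 40, 80, 160, 300, 300, 300]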
Nov 23 08:04:47 crc kubenswrapper[4988]: I1123 08:04:47.496283 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76"
Nov 23 08:04:47 crc kubenswrapper[4988]: E1123 08:04:47.497005 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.921118 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bbc675f-bkvfq"]
Nov 23 08:04:51 crc kubenswrapper[4988]: E1123 08:04:51.921933 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea" containerName="storage"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.921945 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea" containerName="storage"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.922108 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36a5f70-b341-4e4e-ad7f-fe9c5bd2aaea" containerName="storage"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.923049 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bbc675f-bkvfq"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.931443 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bbc675f-bkvfq"]
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.932249 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.932402 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.932555 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-p9p9h"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.932669 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.982940 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bf569cf-gvprw"]
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.990633 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw"
Nov 23 08:04:51 crc kubenswrapper[4988]: I1123 08:04:51.993411 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.030786 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gc2\" (UniqueName: \"kubernetes.io/projected/c340c692-8fa1-47c0-b2a6-7b1df21130c8-kube-api-access-h4gc2\") pod \"dnsmasq-dns-79bbc675f-bkvfq\" (UID: \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\") " pod="openstack/dnsmasq-dns-79bbc675f-bkvfq"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.030867 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c340c692-8fa1-47c0-b2a6-7b1df21130c8-config\") pod \"dnsmasq-dns-79bbc675f-bkvfq\" (UID: \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\") " pod="openstack/dnsmasq-dns-79bbc675f-bkvfq"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.057398 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bf569cf-gvprw"]
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.131895 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c340c692-8fa1-47c0-b2a6-7b1df21130c8-config\") pod \"dnsmasq-dns-79bbc675f-bkvfq\" (UID: \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\") " pod="openstack/dnsmasq-dns-79bbc675f-bkvfq"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.131989 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-config\") pod \"dnsmasq-dns-6b7bf569cf-gvprw\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.132032 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gc2\" (UniqueName: \"kubernetes.io/projected/c340c692-8fa1-47c0-b2a6-7b1df21130c8-kube-api-access-h4gc2\") pod \"dnsmasq-dns-79bbc675f-bkvfq\" (UID: \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\") " pod="openstack/dnsmasq-dns-79bbc675f-bkvfq"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.132051 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgthb\" (UniqueName: \"kubernetes.io/projected/fb2ada1f-f34a-476c-8cf7-901acb199c16-kube-api-access-hgthb\") pod \"dnsmasq-dns-6b7bf569cf-gvprw\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.132069 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-dns-svc\") pod \"dnsmasq-dns-6b7bf569cf-gvprw\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.132960 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c340c692-8fa1-47c0-b2a6-7b1df21130c8-config\") pod \"dnsmasq-dns-79bbc675f-bkvfq\" (UID: \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\") " pod="openstack/dnsmasq-dns-79bbc675f-bkvfq"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.189250 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gc2\" (UniqueName: \"kubernetes.io/projected/c340c692-8fa1-47c0-b2a6-7b1df21130c8-kube-api-access-h4gc2\") pod \"dnsmasq-dns-79bbc675f-bkvfq\" (UID: \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\") " pod="openstack/dnsmasq-dns-79bbc675f-bkvfq"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.216408 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bf569cf-gvprw"]
Nov 23 08:04:52 crc kubenswrapper[4988]: E1123 08:04:52.216842 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-hgthb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw" podUID="fb2ada1f-f34a-476c-8cf7-901acb199c16"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.231974 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-zccpc"]
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.233293 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.233565 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgthb\" (UniqueName: \"kubernetes.io/projected/fb2ada1f-f34a-476c-8cf7-901acb199c16-kube-api-access-hgthb\") pod \"dnsmasq-dns-6b7bf569cf-gvprw\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.233604 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-dns-svc\") pod \"dnsmasq-dns-6b7bf569cf-gvprw\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.233713 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-config\") pod \"dnsmasq-dns-6b7bf569cf-gvprw\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.234603 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-config\") pod \"dnsmasq-dns-6b7bf569cf-gvprw\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.234805 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-dns-svc\") pod \"dnsmasq-dns-6b7bf569cf-gvprw\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw"
Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.246834 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bbc675f-bkvfq"
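The 08:04:52.216842 "Error syncing pod ... unmounted volumes=[config dns-svc kube-api-access-hgthb] ... context canceled" for dnsmasq-dns-6b7bf569cf-gvprw is the flip side of the rapid rollout visible here: the API DELETE arrived while volume setup was still in flight, so the kubelet canceled the mount context and tore the pod down before it ever got a sandbox. The successive dnsmasq-dns-* names (79bbc675f, 6b7bf569cf, 6d6bd8b8c5, 74bc88c489) are consistent with a Deployment being updated several times in quick succession, each update superseding the previous ReplicaSet before its pod finished starting, though the controller side is not visible in this log. A rough way to spot such short-lived pods in a transcript (the regex is an illustrative assumption matching the SyncLoop entries above):

    import re

    EVENT = re.compile(r'"SyncLoop (ADD|DELETE)" source="api" pods=\["([^"]+)"\]')

    def short_lived_pods(journal_text: str) -> list[str]:
        """Pods whose API DELETE appears in the same transcript as their ADD."""
        added, deleted = [], set()
        for kind, pod in EVENT.findall(journal_text):
            (added.append(pod) if kind == "ADD" else deleted.add(pod))
        return [p for p in added if p in deleted]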
Need to start a new one" pod="openstack/dnsmasq-dns-79bbc675f-bkvfq" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.248345 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-zccpc"] Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.268465 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgthb\" (UniqueName: \"kubernetes.io/projected/fb2ada1f-f34a-476c-8cf7-901acb199c16-kube-api-access-hgthb\") pod \"dnsmasq-dns-6b7bf569cf-gvprw\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.336266 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/e965e4de-e657-44f7-a647-e6b3d6718d7c-kube-api-access-ljrcs\") pod \"dnsmasq-dns-6d6bd8b8c5-zccpc\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.336324 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-dns-svc\") pod \"dnsmasq-dns-6d6bd8b8c5-zccpc\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.336355 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-config\") pod \"dnsmasq-dns-6d6bd8b8c5-zccpc\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.437433 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/e965e4de-e657-44f7-a647-e6b3d6718d7c-kube-api-access-ljrcs\") pod \"dnsmasq-dns-6d6bd8b8c5-zccpc\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.437790 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-dns-svc\") pod \"dnsmasq-dns-6d6bd8b8c5-zccpc\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.437837 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-config\") pod \"dnsmasq-dns-6d6bd8b8c5-zccpc\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.438851 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-config\") pod \"dnsmasq-dns-6d6bd8b8c5-zccpc\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.439512 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-dns-svc\") pod \"dnsmasq-dns-6d6bd8b8c5-zccpc\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.473302 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/e965e4de-e657-44f7-a647-e6b3d6718d7c-kube-api-access-ljrcs\") pod \"dnsmasq-dns-6d6bd8b8c5-zccpc\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.526603 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bbc675f-bkvfq"] Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.550569 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.554616 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-pg2jt"] Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.556123 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.570761 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-pg2jt"] Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.632979 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bbc675f-bkvfq"] Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.641857 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l56x9\" (UniqueName: \"kubernetes.io/projected/c08f4458-1a95-439f-b110-6e16f7b80315-kube-api-access-l56x9\") pod \"dnsmasq-dns-74bc88c489-pg2jt\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.641944 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-dns-svc\") pod \"dnsmasq-dns-74bc88c489-pg2jt\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.642000 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-config\") pod \"dnsmasq-dns-74bc88c489-pg2jt\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: W1123 08:04:52.642780 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc340c692_8fa1_47c0_b2a6_7b1df21130c8.slice/crio-4706fbe2b6e16c17f0865eaf21a80f78befeb60177054ca2a76f0fe4531d8ca4 WatchSource:0}: Error finding container 4706fbe2b6e16c17f0865eaf21a80f78befeb60177054ca2a76f0fe4531d8ca4: Status 404 returned error can't find the container with id 4706fbe2b6e16c17f0865eaf21a80f78befeb60177054ca2a76f0fe4531d8ca4 Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.744475 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l56x9\" (UniqueName: 
\"kubernetes.io/projected/c08f4458-1a95-439f-b110-6e16f7b80315-kube-api-access-l56x9\") pod \"dnsmasq-dns-74bc88c489-pg2jt\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.744554 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-dns-svc\") pod \"dnsmasq-dns-74bc88c489-pg2jt\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.744606 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-config\") pod \"dnsmasq-dns-74bc88c489-pg2jt\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.745505 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-dns-svc\") pod \"dnsmasq-dns-74bc88c489-pg2jt\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.746872 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-config\") pod \"dnsmasq-dns-74bc88c489-pg2jt\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.764011 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l56x9\" (UniqueName: \"kubernetes.io/projected/c08f4458-1a95-439f-b110-6e16f7b80315-kube-api-access-l56x9\") pod \"dnsmasq-dns-74bc88c489-pg2jt\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:52 crc kubenswrapper[4988]: I1123 08:04:52.931095 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.012305 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-zccpc"] Nov 23 08:04:53 crc kubenswrapper[4988]: W1123 08:04:53.014362 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode965e4de_e657_44f7_a647_e6b3d6718d7c.slice/crio-21aae6d1158a3166855f247de5f8d02497001a0e3a42bbf889105f9e7e86486b WatchSource:0}: Error finding container 21aae6d1158a3166855f247de5f8d02497001a0e3a42bbf889105f9e7e86486b: Status 404 returned error can't find the container with id 21aae6d1158a3166855f247de5f8d02497001a0e3a42bbf889105f9e7e86486b Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.134054 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bbc675f-bkvfq" event={"ID":"c340c692-8fa1-47c0-b2a6-7b1df21130c8","Type":"ContainerStarted","Data":"4706fbe2b6e16c17f0865eaf21a80f78befeb60177054ca2a76f0fe4531d8ca4"} Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.135822 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.136439 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" event={"ID":"e965e4de-e657-44f7-a647-e6b3d6718d7c","Type":"ContainerStarted","Data":"21aae6d1158a3166855f247de5f8d02497001a0e3a42bbf889105f9e7e86486b"} Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.148962 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.251929 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-dns-svc\") pod \"fb2ada1f-f34a-476c-8cf7-901acb199c16\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.252008 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-config\") pod \"fb2ada1f-f34a-476c-8cf7-901acb199c16\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.252031 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgthb\" (UniqueName: \"kubernetes.io/projected/fb2ada1f-f34a-476c-8cf7-901acb199c16-kube-api-access-hgthb\") pod \"fb2ada1f-f34a-476c-8cf7-901acb199c16\" (UID: \"fb2ada1f-f34a-476c-8cf7-901acb199c16\") " Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.252889 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-config" (OuterVolumeSpecName: "config") pod "fb2ada1f-f34a-476c-8cf7-901acb199c16" (UID: "fb2ada1f-f34a-476c-8cf7-901acb199c16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.253645 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb2ada1f-f34a-476c-8cf7-901acb199c16" (UID: "fb2ada1f-f34a-476c-8cf7-901acb199c16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.259743 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.259771 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2ada1f-f34a-476c-8cf7-901acb199c16-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.280478 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2ada1f-f34a-476c-8cf7-901acb199c16-kube-api-access-hgthb" (OuterVolumeSpecName: "kube-api-access-hgthb") pod "fb2ada1f-f34a-476c-8cf7-901acb199c16" (UID: "fb2ada1f-f34a-476c-8cf7-901acb199c16"). InnerVolumeSpecName "kube-api-access-hgthb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.361626 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgthb\" (UniqueName: \"kubernetes.io/projected/fb2ada1f-f34a-476c-8cf7-901acb199c16-kube-api-access-hgthb\") on node \"crc\" DevicePath \"\"" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.395883 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.396997 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.399035 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.399238 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.399413 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.399613 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.399744 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-z64gb" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.400055 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.403536 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.418889 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462550 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462609 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462672 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxqw4\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-kube-api-access-pxqw4\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462713 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462750 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886af2c6-ba57-498a-9c4a-c85f37b51f57-pod-info\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462775 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462803 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462825 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462857 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-server-conf\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462902 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-config-data\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.462924 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886af2c6-ba57-498a-9c4a-c85f37b51f57-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.466968 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-pg2jt"] Nov 23 08:04:53 crc kubenswrapper[4988]: W1123 08:04:53.501133 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc08f4458_1a95_439f_b110_6e16f7b80315.slice/crio-962ffb9141259314809e1104f0dc4bfdaa49c0d20dce752227b416458ac24592 WatchSource:0}: Error finding container 962ffb9141259314809e1104f0dc4bfdaa49c0d20dce752227b416458ac24592: Status 404 returned error can't find the container with id 962ffb9141259314809e1104f0dc4bfdaa49c0d20dce752227b416458ac24592 Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564041 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-config-data\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564098 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886af2c6-ba57-498a-9c4a-c85f37b51f57-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564131 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564169 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564235 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxqw4\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-kube-api-access-pxqw4\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564261 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564299 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886af2c6-ba57-498a-9c4a-c85f37b51f57-pod-info\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564319 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564341 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564376 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-plugins-conf\") 
pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.564402 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-server-conf\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.565118 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.565167 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.565721 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.566056 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-server-conf\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.566628 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-config-data\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.571187 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.571243 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3408bfff62d4ad41e60bfe9f001136a685473f5a05f5e149e50634d5817dca8c/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.571682 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.572115 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.575764 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886af2c6-ba57-498a-9c4a-c85f37b51f57-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.576019 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886af2c6-ba57-498a-9c4a-c85f37b51f57-pod-info\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.590522 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxqw4\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-kube-api-access-pxqw4\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.616988 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") pod \"rabbitmq-server-0\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.673519 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.674770 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.676858 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.680519 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.682324 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dtnkp" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.682597 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.682704 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.682794 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.683823 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.690441 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.752776 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.767771 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.767816 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.767838 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmxfl\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-kube-api-access-nmxfl\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.768064 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.768167 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.768242 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.768287 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f36dd15-56cc-4eb7-93f6-3e756c558d46-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.768328 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.768360 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.768386 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.768461 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f36dd15-56cc-4eb7-93f6-3e756c558d46-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.871825 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.871892 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.871937 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.871988 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f36dd15-56cc-4eb7-93f6-3e756c558d46-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.872055 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.872105 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.872130 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmxfl\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-kube-api-access-nmxfl\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.872152 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.872240 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.872296 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.872326 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f36dd15-56cc-4eb7-93f6-3e756c558d46-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.874343 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 
08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.874970 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.876797 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.877123 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.880606 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.880861 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d46a246ca526fcef24696c18c6f4480a3191c51b23a1173f24782f0c1da5792/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.881382 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f36dd15-56cc-4eb7-93f6-3e756c558d46-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.881912 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.885020 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.886030 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f36dd15-56cc-4eb7-93f6-3e756c558d46-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.891708 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.894736 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmxfl\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-kube-api-access-nmxfl\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.941582 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") pod \"rabbitmq-cell1-server-0\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:53 crc kubenswrapper[4988]: I1123 08:04:53.995416 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.166121 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bf569cf-gvprw" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.166540 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" event={"ID":"c08f4458-1a95-439f-b110-6e16f7b80315","Type":"ContainerStarted","Data":"962ffb9141259314809e1104f0dc4bfdaa49c0d20dce752227b416458ac24592"} Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.261337 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bf569cf-gvprw"] Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.338342 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bf569cf-gvprw"] Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.360058 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.506790 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2ada1f-f34a-476c-8cf7-901acb199c16" path="/var/lib/kubelet/pods/fb2ada1f-f34a-476c-8cf7-901acb199c16/volumes" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.557003 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.558803 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.561011 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-85q7k" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.561213 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.561346 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.563255 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.567946 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.580318 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.686105 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/214ef4a2-145f-4545-9f14-634ff88be88a-config-data-default\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.686186 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/214ef4a2-145f-4545-9f14-634ff88be88a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.686232 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/214ef4a2-145f-4545-9f14-634ff88be88a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.686255 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214ef4a2-145f-4545-9f14-634ff88be88a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.686274 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214ef4a2-145f-4545-9f14-634ff88be88a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.686300 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qlr4\" (UniqueName: \"kubernetes.io/projected/214ef4a2-145f-4545-9f14-634ff88be88a-kube-api-access-7qlr4\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.686326 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/214ef4a2-145f-4545-9f14-634ff88be88a-kolla-config\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.686350 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a7fe7ce3-f277-45e3-8eb1-64140035a1f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a7fe7ce3-f277-45e3-8eb1-64140035a1f4\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.788186 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/214ef4a2-145f-4545-9f14-634ff88be88a-config-data-default\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.788315 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/214ef4a2-145f-4545-9f14-634ff88be88a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.788343 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/214ef4a2-145f-4545-9f14-634ff88be88a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.788377 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214ef4a2-145f-4545-9f14-634ff88be88a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.788395 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214ef4a2-145f-4545-9f14-634ff88be88a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.788416 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qlr4\" (UniqueName: \"kubernetes.io/projected/214ef4a2-145f-4545-9f14-634ff88be88a-kube-api-access-7qlr4\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.788434 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/214ef4a2-145f-4545-9f14-634ff88be88a-kolla-config\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.788474 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a7fe7ce3-f277-45e3-8eb1-64140035a1f4\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a7fe7ce3-f277-45e3-8eb1-64140035a1f4\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.789034 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/214ef4a2-145f-4545-9f14-634ff88be88a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.789593 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/214ef4a2-145f-4545-9f14-634ff88be88a-config-data-default\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.791641 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/214ef4a2-145f-4545-9f14-634ff88be88a-kolla-config\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.792945 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.792975 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a7fe7ce3-f277-45e3-8eb1-64140035a1f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a7fe7ce3-f277-45e3-8eb1-64140035a1f4\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/21402948577f1fcbe1fb11e3daf4fa7b732f00d13b7358637d320d8c2c40cb18/globalmount\"" pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.796775 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/214ef4a2-145f-4545-9f14-634ff88be88a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.801240 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/214ef4a2-145f-4545-9f14-634ff88be88a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.804029 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214ef4a2-145f-4545-9f14-634ff88be88a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.811736 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qlr4\" (UniqueName: \"kubernetes.io/projected/214ef4a2-145f-4545-9f14-634ff88be88a-kube-api-access-7qlr4\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " 
pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.833119 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a7fe7ce3-f277-45e3-8eb1-64140035a1f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a7fe7ce3-f277-45e3-8eb1-64140035a1f4\") pod \"openstack-galera-0\" (UID: \"214ef4a2-145f-4545-9f14-634ff88be88a\") " pod="openstack/openstack-galera-0" Nov 23 08:04:54 crc kubenswrapper[4988]: I1123 08:04:54.887572 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 08:04:55 crc kubenswrapper[4988]: I1123 08:04:55.738201 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:04:55 crc kubenswrapper[4988]: W1123 08:04:55.749790 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f36dd15_56cc_4eb7_93f6_3e756c558d46.slice/crio-7087a43fe8cbff2a9233203765b48021c1caabbf7fc4707f204743fb9ac38115 WatchSource:0}: Error finding container 7087a43fe8cbff2a9233203765b48021c1caabbf7fc4707f204743fb9ac38115: Status 404 returned error can't find the container with id 7087a43fe8cbff2a9233203765b48021c1caabbf7fc4707f204743fb9ac38115 Nov 23 08:04:55 crc kubenswrapper[4988]: I1123 08:04:55.834277 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.056213 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.057554 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.060716 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.060960 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tt5rz" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.061113 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.061263 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.086159 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.117638 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv884\" (UniqueName: \"kubernetes.io/projected/5a31e193-64cd-4be2-bbe6-9b899d22c30f-kube-api-access-hv884\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.117696 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a31e193-64cd-4be2-bbe6-9b899d22c30f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.117742 4988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a0d3dd03-9972-41c2-bfe6-f722d24a1f77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0d3dd03-9972-41c2-bfe6-f722d24a1f77\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.117952 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5a31e193-64cd-4be2-bbe6-9b899d22c30f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.118008 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a31e193-64cd-4be2-bbe6-9b899d22c30f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.118105 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a31e193-64cd-4be2-bbe6-9b899d22c30f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.118163 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5a31e193-64cd-4be2-bbe6-9b899d22c30f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.118264 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5a31e193-64cd-4be2-bbe6-9b899d22c30f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.184204 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f36dd15-56cc-4eb7-93f6-3e756c558d46","Type":"ContainerStarted","Data":"7087a43fe8cbff2a9233203765b48021c1caabbf7fc4707f204743fb9ac38115"} Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.185529 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"886af2c6-ba57-498a-9c4a-c85f37b51f57","Type":"ContainerStarted","Data":"18f4c55d93981884c29856af3a62c51c719aa9aca4d843a51ecb742901dff3d9"} Nov 23 08:04:56 crc kubenswrapper[4988]: W1123 08:04:56.215430 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod214ef4a2_145f_4545_9f14_634ff88be88a.slice/crio-283bdfc485e39b3a6ac20070ac0559005575bfb92b2fd0b23c4f9631388142a8 WatchSource:0}: Error finding container 283bdfc485e39b3a6ac20070ac0559005575bfb92b2fd0b23c4f9631388142a8: Status 404 returned error can't find the container with id 283bdfc485e39b3a6ac20070ac0559005575bfb92b2fd0b23c4f9631388142a8 
Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.220455 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a31e193-64cd-4be2-bbe6-9b899d22c30f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.220696 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5a31e193-64cd-4be2-bbe6-9b899d22c30f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.220723 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5a31e193-64cd-4be2-bbe6-9b899d22c30f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.220773 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv884\" (UniqueName: \"kubernetes.io/projected/5a31e193-64cd-4be2-bbe6-9b899d22c30f-kube-api-access-hv884\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.220795 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a31e193-64cd-4be2-bbe6-9b899d22c30f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.220841 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a0d3dd03-9972-41c2-bfe6-f722d24a1f77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0d3dd03-9972-41c2-bfe6-f722d24a1f77\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.220877 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5a31e193-64cd-4be2-bbe6-9b899d22c30f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.220922 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a31e193-64cd-4be2-bbe6-9b899d22c30f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.222658 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5a31e193-64cd-4be2-bbe6-9b899d22c30f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 
08:04:56.223496 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5a31e193-64cd-4be2-bbe6-9b899d22c30f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.224453 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a31e193-64cd-4be2-bbe6-9b899d22c30f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.226927 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a31e193-64cd-4be2-bbe6-9b899d22c30f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.227080 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5a31e193-64cd-4be2-bbe6-9b899d22c30f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.227601 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.227621 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a0d3dd03-9972-41c2-bfe6-f722d24a1f77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0d3dd03-9972-41c2-bfe6-f722d24a1f77\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ddcc7e8318dee412c0b944b860cf76558f1df2124e894962ae41d4b853f83199/globalmount\"" pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.227935 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a31e193-64cd-4be2-bbe6-9b899d22c30f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.236564 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv884\" (UniqueName: \"kubernetes.io/projected/5a31e193-64cd-4be2-bbe6-9b899d22c30f-kube-api-access-hv884\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.258983 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a0d3dd03-9972-41c2-bfe6-f722d24a1f77\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a0d3dd03-9972-41c2-bfe6-f722d24a1f77\") pod \"openstack-cell1-galera-0\" (UID: \"5a31e193-64cd-4be2-bbe6-9b899d22c30f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.398252 4988 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.459622 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.460924 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.463242 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.463280 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.463450 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5t79n" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.476762 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.525807 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-kolla-config\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.525881 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-config-data\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.525927 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.525985 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.526038 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwbsd\" (UniqueName: \"kubernetes.io/projected/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-kube-api-access-wwbsd\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.627938 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwbsd\" (UniqueName: \"kubernetes.io/projected/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-kube-api-access-wwbsd\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.628269 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-kolla-config\") pod 
\"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.628304 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-config-data\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.628328 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.628380 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.630067 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-kolla-config\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.630113 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-config-data\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.633415 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.644774 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.647672 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwbsd\" (UniqueName: \"kubernetes.io/projected/1cd2fb7e-7cf8-40c2-8274-f81ed5838b04-kube-api-access-wwbsd\") pod \"memcached-0\" (UID: \"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04\") " pod="openstack/memcached-0" Nov 23 08:04:56 crc kubenswrapper[4988]: I1123 08:04:56.791594 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 23 08:04:57 crc kubenswrapper[4988]: I1123 08:04:57.196336 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"214ef4a2-145f-4545-9f14-634ff88be88a","Type":"ContainerStarted","Data":"283bdfc485e39b3a6ac20070ac0559005575bfb92b2fd0b23c4f9631388142a8"} Nov 23 08:04:59 crc kubenswrapper[4988]: I1123 08:04:59.496709 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.048029 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.151163 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 08:05:09 crc kubenswrapper[4988]: W1123 08:05:09.174008 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cd2fb7e_7cf8_40c2_8274_f81ed5838b04.slice/crio-7af8ff1959ca4c86e7c36767beb2c98bfff4dbf94d433b2caddd660cdbe59bd1 WatchSource:0}: Error finding container 7af8ff1959ca4c86e7c36767beb2c98bfff4dbf94d433b2caddd660cdbe59bd1: Status 404 returned error can't find the container with id 7af8ff1959ca4c86e7c36767beb2c98bfff4dbf94d433b2caddd660cdbe59bd1 Nov 23 08:05:09 crc kubenswrapper[4988]: W1123 08:05:09.174343 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a31e193_64cd_4be2_bbe6_9b899d22c30f.slice/crio-589b9d2dd00fbd752ed3b963a75f64c9ef5ab43984c4594c9a752fd65e2d4738 WatchSource:0}: Error finding container 589b9d2dd00fbd752ed3b963a75f64c9ef5ab43984c4594c9a752fd65e2d4738: Status 404 returned error can't find the container with id 589b9d2dd00fbd752ed3b963a75f64c9ef5ab43984c4594c9a752fd65e2d4738 Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.310615 4988 generic.go:334] "Generic (PLEG): container finished" podID="c08f4458-1a95-439f-b110-6e16f7b80315" containerID="d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a" exitCode=0 Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.310994 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" event={"ID":"c08f4458-1a95-439f-b110-6e16f7b80315","Type":"ContainerDied","Data":"d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a"} Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.313539 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a31e193-64cd-4be2-bbe6-9b899d22c30f","Type":"ContainerStarted","Data":"589b9d2dd00fbd752ed3b963a75f64c9ef5ab43984c4594c9a752fd65e2d4738"} Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.316694 4988 generic.go:334] "Generic (PLEG): container finished" podID="c340c692-8fa1-47c0-b2a6-7b1df21130c8" containerID="b002259edb6d46a60ee872c224a8a9e32bcb1d7ee14b786374d17d2d24feb169" exitCode=0 Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.316784 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bbc675f-bkvfq" event={"ID":"c340c692-8fa1-47c0-b2a6-7b1df21130c8","Type":"ContainerDied","Data":"b002259edb6d46a60ee872c224a8a9e32bcb1d7ee14b786374d17d2d24feb169"} Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.322563 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"214ef4a2-145f-4545-9f14-634ff88be88a","Type":"ContainerStarted","Data":"ec8e58575aa6829e385a4b045fa36e2ec61e7380500708c75a48e91bf717d999"} Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.327186 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"1bb2e6bd6db24d664f8d0930235f12055689a53d4dac8186f041dc0d609d1b5d"} Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.334709 4988 generic.go:334] "Generic (PLEG): container finished" podID="e965e4de-e657-44f7-a647-e6b3d6718d7c" containerID="45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e" exitCode=0 Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.334822 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" event={"ID":"e965e4de-e657-44f7-a647-e6b3d6718d7c","Type":"ContainerDied","Data":"45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e"} Nov 23 08:05:09 crc kubenswrapper[4988]: I1123 08:05:09.338802 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04","Type":"ContainerStarted","Data":"7af8ff1959ca4c86e7c36767beb2c98bfff4dbf94d433b2caddd660cdbe59bd1"} Nov 23 08:05:09 crc kubenswrapper[4988]: E1123 08:05:09.632711 4988 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 23 08:05:09 crc kubenswrapper[4988]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e965e4de-e657-44f7-a647-e6b3d6718d7c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 23 08:05:09 crc kubenswrapper[4988]: > podSandboxID="21aae6d1158a3166855f247de5f8d02497001a0e3a42bbf889105f9e7e86486b" Nov 23 08:05:09 crc kubenswrapper[4988]: E1123 08:05:09.632880 4988 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 23 08:05:09 crc kubenswrapper[4988]: container &Container{Name:dnsmasq-dns,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljrcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6d6bd8b8c5-zccpc_openstack(e965e4de-e657-44f7-a647-e6b3d6718d7c): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e965e4de-e657-44f7-a647-e6b3d6718d7c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 23 08:05:09 crc kubenswrapper[4988]: > logger="UnhandledError" Nov 23 08:05:09 crc kubenswrapper[4988]: E1123 08:05:09.633958 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e965e4de-e657-44f7-a647-e6b3d6718d7c/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" podUID="e965e4de-e657-44f7-a647-e6b3d6718d7c" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.239217 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bbc675f-bkvfq" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.350778 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f36dd15-56cc-4eb7-93f6-3e756c558d46","Type":"ContainerStarted","Data":"dad119f0b594536a909c1edefc336251d032d44b6f0b9a72551951248420978a"} Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.365252 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" event={"ID":"c08f4458-1a95-439f-b110-6e16f7b80315","Type":"ContainerStarted","Data":"384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5"} Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.366028 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.368017 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a31e193-64cd-4be2-bbe6-9b899d22c30f","Type":"ContainerStarted","Data":"b52fa60782fd9cd140d6e67167211e5b8cf97c7ca3e0ed4694d97067e871c8c5"} Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.369485 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bbc675f-bkvfq" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.369500 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bbc675f-bkvfq" event={"ID":"c340c692-8fa1-47c0-b2a6-7b1df21130c8","Type":"ContainerDied","Data":"4706fbe2b6e16c17f0865eaf21a80f78befeb60177054ca2a76f0fe4531d8ca4"} Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.369523 4988 scope.go:117] "RemoveContainer" containerID="b002259edb6d46a60ee872c224a8a9e32bcb1d7ee14b786374d17d2d24feb169" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.374637 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"886af2c6-ba57-498a-9c4a-c85f37b51f57","Type":"ContainerStarted","Data":"aba3514fbcb5d997efcb6ca3bbac23486cfe1410000c86d45f92ce2fb99823cd"} Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.385984 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c340c692-8fa1-47c0-b2a6-7b1df21130c8-config\") pod \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\" (UID: \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\") " Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.386061 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4gc2\" (UniqueName: \"kubernetes.io/projected/c340c692-8fa1-47c0-b2a6-7b1df21130c8-kube-api-access-h4gc2\") pod \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\" (UID: \"c340c692-8fa1-47c0-b2a6-7b1df21130c8\") " Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.417339 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c340c692-8fa1-47c0-b2a6-7b1df21130c8-config" (OuterVolumeSpecName: "config") pod "c340c692-8fa1-47c0-b2a6-7b1df21130c8" (UID: "c340c692-8fa1-47c0-b2a6-7b1df21130c8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.417605 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c340c692-8fa1-47c0-b2a6-7b1df21130c8-kube-api-access-h4gc2" (OuterVolumeSpecName: "kube-api-access-h4gc2") pod "c340c692-8fa1-47c0-b2a6-7b1df21130c8" (UID: "c340c692-8fa1-47c0-b2a6-7b1df21130c8"). InnerVolumeSpecName "kube-api-access-h4gc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.417686 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" podStartSLOduration=3.175980376 podStartE2EDuration="18.417668708s" podCreationTimestamp="2025-11-23 08:04:52 +0000 UTC" firstStartedPulling="2025-11-23 08:04:53.50767587 +0000 UTC m=+4745.816188633" lastFinishedPulling="2025-11-23 08:05:08.749364212 +0000 UTC m=+4761.057876965" observedRunningTime="2025-11-23 08:05:10.417631697 +0000 UTC m=+4762.726144490" watchObservedRunningTime="2025-11-23 08:05:10.417668708 +0000 UTC m=+4762.726181471" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.487869 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4gc2\" (UniqueName: \"kubernetes.io/projected/c340c692-8fa1-47c0-b2a6-7b1df21130c8-kube-api-access-h4gc2\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.487908 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c340c692-8fa1-47c0-b2a6-7b1df21130c8-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.703970 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bbc675f-bkvfq"] Nov 23 08:05:10 crc kubenswrapper[4988]: I1123 08:05:10.710407 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bbc675f-bkvfq"] Nov 23 08:05:12 crc kubenswrapper[4988]: I1123 08:05:12.396724 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" event={"ID":"e965e4de-e657-44f7-a647-e6b3d6718d7c","Type":"ContainerStarted","Data":"ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f"} Nov 23 08:05:12 crc kubenswrapper[4988]: I1123 08:05:12.397915 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:05:12 crc kubenswrapper[4988]: I1123 08:05:12.400975 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1cd2fb7e-7cf8-40c2-8274-f81ed5838b04","Type":"ContainerStarted","Data":"6d6e93de9d319158ff5c5e27bb6a0e70abdbae91f84aa752db330e89c3c42e5d"} Nov 23 08:05:12 crc kubenswrapper[4988]: I1123 08:05:12.401419 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 23 08:05:12 crc kubenswrapper[4988]: I1123 08:05:12.435899 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" podStartSLOduration=4.781502454 podStartE2EDuration="20.435873977s" podCreationTimestamp="2025-11-23 08:04:52 +0000 UTC" firstStartedPulling="2025-11-23 08:04:53.018585177 +0000 UTC m=+4745.327097940" lastFinishedPulling="2025-11-23 08:05:08.67295669 +0000 UTC m=+4760.981469463" observedRunningTime="2025-11-23 08:05:12.425566405 +0000 UTC m=+4764.734079208" watchObservedRunningTime="2025-11-23 08:05:12.435873977 
+0000 UTC m=+4764.744386770" Nov 23 08:05:12 crc kubenswrapper[4988]: I1123 08:05:12.462122 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=14.276439986 podStartE2EDuration="16.4620934s" podCreationTimestamp="2025-11-23 08:04:56 +0000 UTC" firstStartedPulling="2025-11-23 08:05:09.176368053 +0000 UTC m=+4761.484880816" lastFinishedPulling="2025-11-23 08:05:11.362021467 +0000 UTC m=+4763.670534230" observedRunningTime="2025-11-23 08:05:12.44865007 +0000 UTC m=+4764.757162873" watchObservedRunningTime="2025-11-23 08:05:12.4620934 +0000 UTC m=+4764.770606203" Nov 23 08:05:12 crc kubenswrapper[4988]: I1123 08:05:12.514041 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c340c692-8fa1-47c0-b2a6-7b1df21130c8" path="/var/lib/kubelet/pods/c340c692-8fa1-47c0-b2a6-7b1df21130c8/volumes" Nov 23 08:05:13 crc kubenswrapper[4988]: I1123 08:05:13.410299 4988 generic.go:334] "Generic (PLEG): container finished" podID="5a31e193-64cd-4be2-bbe6-9b899d22c30f" containerID="b52fa60782fd9cd140d6e67167211e5b8cf97c7ca3e0ed4694d97067e871c8c5" exitCode=0 Nov 23 08:05:13 crc kubenswrapper[4988]: I1123 08:05:13.410355 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a31e193-64cd-4be2-bbe6-9b899d22c30f","Type":"ContainerDied","Data":"b52fa60782fd9cd140d6e67167211e5b8cf97c7ca3e0ed4694d97067e871c8c5"} Nov 23 08:05:13 crc kubenswrapper[4988]: I1123 08:05:13.413349 4988 generic.go:334] "Generic (PLEG): container finished" podID="214ef4a2-145f-4545-9f14-634ff88be88a" containerID="ec8e58575aa6829e385a4b045fa36e2ec61e7380500708c75a48e91bf717d999" exitCode=0 Nov 23 08:05:13 crc kubenswrapper[4988]: I1123 08:05:13.413701 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"214ef4a2-145f-4545-9f14-634ff88be88a","Type":"ContainerDied","Data":"ec8e58575aa6829e385a4b045fa36e2ec61e7380500708c75a48e91bf717d999"} Nov 23 08:05:14 crc kubenswrapper[4988]: I1123 08:05:14.427379 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5a31e193-64cd-4be2-bbe6-9b899d22c30f","Type":"ContainerStarted","Data":"2c227030d7c62f4f219366ea6935826d0a1d1de0a338b2df69b6b7a490ce088a"} Nov 23 08:05:14 crc kubenswrapper[4988]: I1123 08:05:14.429205 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"214ef4a2-145f-4545-9f14-634ff88be88a","Type":"ContainerStarted","Data":"e8b03909aca61ded2ca5ef6a0f851fc4b9a631ade131a7499144d4aeaa6a2fb7"} Nov 23 08:05:14 crc kubenswrapper[4988]: I1123 08:05:14.462834 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.462807762 podStartE2EDuration="19.462807762s" podCreationTimestamp="2025-11-23 08:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:05:14.45090533 +0000 UTC m=+4766.759418133" watchObservedRunningTime="2025-11-23 08:05:14.462807762 +0000 UTC m=+4766.771320575" Nov 23 08:05:14 crc kubenswrapper[4988]: I1123 08:05:14.488088 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.000874149 podStartE2EDuration="21.48806426s" podCreationTimestamp="2025-11-23 08:04:53 +0000 UTC" firstStartedPulling="2025-11-23 08:04:56.217327492 +0000 UTC 
m=+4748.525840255" lastFinishedPulling="2025-11-23 08:05:08.704517593 +0000 UTC m=+4761.013030366" observedRunningTime="2025-11-23 08:05:14.477457291 +0000 UTC m=+4766.785970094" watchObservedRunningTime="2025-11-23 08:05:14.48806426 +0000 UTC m=+4766.796577063" Nov 23 08:05:14 crc kubenswrapper[4988]: I1123 08:05:14.888722 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 23 08:05:14 crc kubenswrapper[4988]: I1123 08:05:14.889163 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 23 08:05:16 crc kubenswrapper[4988]: I1123 08:05:16.399360 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 23 08:05:16 crc kubenswrapper[4988]: I1123 08:05:16.399528 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 23 08:05:16 crc kubenswrapper[4988]: I1123 08:05:16.794348 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 23 08:05:17 crc kubenswrapper[4988]: I1123 08:05:17.552440 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:05:17 crc kubenswrapper[4988]: I1123 08:05:17.933448 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.020115 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-zccpc"] Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.465923 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" podUID="e965e4de-e657-44f7-a647-e6b3d6718d7c" containerName="dnsmasq-dns" containerID="cri-o://ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f" gracePeriod=10 Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.883882 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.923282 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-dns-svc\") pod \"e965e4de-e657-44f7-a647-e6b3d6718d7c\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.923365 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-config\") pod \"e965e4de-e657-44f7-a647-e6b3d6718d7c\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.923451 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/e965e4de-e657-44f7-a647-e6b3d6718d7c-kube-api-access-ljrcs\") pod \"e965e4de-e657-44f7-a647-e6b3d6718d7c\" (UID: \"e965e4de-e657-44f7-a647-e6b3d6718d7c\") " Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.930770 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e965e4de-e657-44f7-a647-e6b3d6718d7c-kube-api-access-ljrcs" (OuterVolumeSpecName: "kube-api-access-ljrcs") pod "e965e4de-e657-44f7-a647-e6b3d6718d7c" (UID: "e965e4de-e657-44f7-a647-e6b3d6718d7c"). InnerVolumeSpecName "kube-api-access-ljrcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.970872 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e965e4de-e657-44f7-a647-e6b3d6718d7c" (UID: "e965e4de-e657-44f7-a647-e6b3d6718d7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.976616 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 23 08:05:18 crc kubenswrapper[4988]: I1123 08:05:18.984417 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-config" (OuterVolumeSpecName: "config") pod "e965e4de-e657-44f7-a647-e6b3d6718d7c" (UID: "e965e4de-e657-44f7-a647-e6b3d6718d7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.026760 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.026808 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e965e4de-e657-44f7-a647-e6b3d6718d7c-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.026831 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/e965e4de-e657-44f7-a647-e6b3d6718d7c-kube-api-access-ljrcs\") on node \"crc\" DevicePath \"\"" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.068565 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.478598 4988 generic.go:334] "Generic (PLEG): container finished" podID="e965e4de-e657-44f7-a647-e6b3d6718d7c" containerID="ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f" exitCode=0 Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.478736 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.478731 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" event={"ID":"e965e4de-e657-44f7-a647-e6b3d6718d7c","Type":"ContainerDied","Data":"ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f"} Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.479433 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-zccpc" event={"ID":"e965e4de-e657-44f7-a647-e6b3d6718d7c","Type":"ContainerDied","Data":"21aae6d1158a3166855f247de5f8d02497001a0e3a42bbf889105f9e7e86486b"} Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.479488 4988 scope.go:117] "RemoveContainer" containerID="ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.517611 4988 scope.go:117] "RemoveContainer" containerID="45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.524742 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-zccpc"] Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.530770 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-zccpc"] Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.546531 4988 scope.go:117] "RemoveContainer" containerID="ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f" Nov 23 08:05:19 crc kubenswrapper[4988]: E1123 08:05:19.547050 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f\": container with ID starting with ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f not found: ID does not exist" containerID="ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.547133 4988 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f"} err="failed to get container status \"ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f\": rpc error: code = NotFound desc = could not find container \"ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f\": container with ID starting with ca566dc8d983941bae9a970b34687b740d679e3ad394ab11145cf4293a3b506f not found: ID does not exist" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.547186 4988 scope.go:117] "RemoveContainer" containerID="45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e" Nov 23 08:05:19 crc kubenswrapper[4988]: E1123 08:05:19.547727 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e\": container with ID starting with 45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e not found: ID does not exist" containerID="45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e" Nov 23 08:05:19 crc kubenswrapper[4988]: I1123 08:05:19.547768 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e"} err="failed to get container status \"45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e\": rpc error: code = NotFound desc = could not find container \"45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e\": container with ID starting with 45ca4745a73ff14994672d5ee0b87fdd18b5cdce99fcaee8037471cdb9ac698e not found: ID does not exist" Nov 23 08:05:20 crc kubenswrapper[4988]: I1123 08:05:20.508903 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e965e4de-e657-44f7-a647-e6b3d6718d7c" path="/var/lib/kubelet/pods/e965e4de-e657-44f7-a647-e6b3d6718d7c/volumes" Nov 23 08:05:20 crc kubenswrapper[4988]: I1123 08:05:20.510701 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 23 08:05:20 crc kubenswrapper[4988]: I1123 08:05:20.620438 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 23 08:05:42 crc kubenswrapper[4988]: I1123 08:05:42.698140 4988 generic.go:334] "Generic (PLEG): container finished" podID="886af2c6-ba57-498a-9c4a-c85f37b51f57" containerID="aba3514fbcb5d997efcb6ca3bbac23486cfe1410000c86d45f92ce2fb99823cd" exitCode=0 Nov 23 08:05:42 crc kubenswrapper[4988]: I1123 08:05:42.698304 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"886af2c6-ba57-498a-9c4a-c85f37b51f57","Type":"ContainerDied","Data":"aba3514fbcb5d997efcb6ca3bbac23486cfe1410000c86d45f92ce2fb99823cd"} Nov 23 08:05:42 crc kubenswrapper[4988]: I1123 08:05:42.703129 4988 generic.go:334] "Generic (PLEG): container finished" podID="9f36dd15-56cc-4eb7-93f6-3e756c558d46" containerID="dad119f0b594536a909c1edefc336251d032d44b6f0b9a72551951248420978a" exitCode=0 Nov 23 08:05:42 crc kubenswrapper[4988]: I1123 08:05:42.703186 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f36dd15-56cc-4eb7-93f6-3e756c558d46","Type":"ContainerDied","Data":"dad119f0b594536a909c1edefc336251d032d44b6f0b9a72551951248420978a"} Nov 23 08:05:43 crc kubenswrapper[4988]: I1123 08:05:43.713711 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"886af2c6-ba57-498a-9c4a-c85f37b51f57","Type":"ContainerStarted","Data":"00ad0224719e8c058156032c59b7932397c1524c67dd79db9a3e400da41699d1"} Nov 23 08:05:43 crc kubenswrapper[4988]: I1123 08:05:43.714501 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 08:05:43 crc kubenswrapper[4988]: I1123 08:05:43.716534 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f36dd15-56cc-4eb7-93f6-3e756c558d46","Type":"ContainerStarted","Data":"2cbc9c80965fe06441143100f23f68a85205e4404401eec41f392f7aed200c2b"} Nov 23 08:05:43 crc kubenswrapper[4988]: I1123 08:05:43.717630 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:05:43 crc kubenswrapper[4988]: I1123 08:05:43.747176 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.434816083 podStartE2EDuration="51.747152152s" podCreationTimestamp="2025-11-23 08:04:52 +0000 UTC" firstStartedPulling="2025-11-23 08:04:55.287630472 +0000 UTC m=+4747.596143235" lastFinishedPulling="2025-11-23 08:05:08.599966541 +0000 UTC m=+4760.908479304" observedRunningTime="2025-11-23 08:05:43.739483284 +0000 UTC m=+4796.047996087" watchObservedRunningTime="2025-11-23 08:05:43.747152152 +0000 UTC m=+4796.055664955" Nov 23 08:05:43 crc kubenswrapper[4988]: I1123 08:05:43.780795 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.915716407 podStartE2EDuration="51.780766686s" podCreationTimestamp="2025-11-23 08:04:52 +0000 UTC" firstStartedPulling="2025-11-23 08:04:55.753354534 +0000 UTC m=+4748.061867297" lastFinishedPulling="2025-11-23 08:05:08.618404803 +0000 UTC m=+4760.926917576" observedRunningTime="2025-11-23 08:05:43.775845935 +0000 UTC m=+4796.084358708" watchObservedRunningTime="2025-11-23 08:05:43.780766686 +0000 UTC m=+4796.089279489" Nov 23 08:05:53 crc kubenswrapper[4988]: I1123 08:05:53.758502 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 23 08:05:53 crc kubenswrapper[4988]: I1123 08:05:53.998371 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.569532 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-97464f77-cjzc6"] Nov 23 08:05:59 crc kubenswrapper[4988]: E1123 08:05:59.570889 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c340c692-8fa1-47c0-b2a6-7b1df21130c8" containerName="init" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.570914 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c340c692-8fa1-47c0-b2a6-7b1df21130c8" containerName="init" Nov 23 08:05:59 crc kubenswrapper[4988]: E1123 08:05:59.570960 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e965e4de-e657-44f7-a647-e6b3d6718d7c" containerName="init" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.570972 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e965e4de-e657-44f7-a647-e6b3d6718d7c" containerName="init" Nov 23 08:05:59 crc kubenswrapper[4988]: E1123 08:05:59.570988 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e965e4de-e657-44f7-a647-e6b3d6718d7c" containerName="dnsmasq-dns" Nov 23 08:05:59 crc 
kubenswrapper[4988]: I1123 08:05:59.570999 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e965e4de-e657-44f7-a647-e6b3d6718d7c" containerName="dnsmasq-dns" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.571295 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c340c692-8fa1-47c0-b2a6-7b1df21130c8" containerName="init" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.571322 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="e965e4de-e657-44f7-a647-e6b3d6718d7c" containerName="dnsmasq-dns" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.572590 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.582284 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-97464f77-cjzc6"] Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.707983 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qtw\" (UniqueName: \"kubernetes.io/projected/d95ee697-07b0-42af-8d15-57661cd6de66-kube-api-access-z5qtw\") pod \"dnsmasq-dns-97464f77-cjzc6\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.708060 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-config\") pod \"dnsmasq-dns-97464f77-cjzc6\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.708159 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-dns-svc\") pod \"dnsmasq-dns-97464f77-cjzc6\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.809530 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qtw\" (UniqueName: \"kubernetes.io/projected/d95ee697-07b0-42af-8d15-57661cd6de66-kube-api-access-z5qtw\") pod \"dnsmasq-dns-97464f77-cjzc6\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.809606 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-config\") pod \"dnsmasq-dns-97464f77-cjzc6\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.809703 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-dns-svc\") pod \"dnsmasq-dns-97464f77-cjzc6\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.811244 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-dns-svc\") pod \"dnsmasq-dns-97464f77-cjzc6\" (UID: 
\"d95ee697-07b0-42af-8d15-57661cd6de66\") " pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.811472 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-config\") pod \"dnsmasq-dns-97464f77-cjzc6\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.834264 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qtw\" (UniqueName: \"kubernetes.io/projected/d95ee697-07b0-42af-8d15-57661cd6de66-kube-api-access-z5qtw\") pod \"dnsmasq-dns-97464f77-cjzc6\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:05:59 crc kubenswrapper[4988]: I1123 08:05:59.916637 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:06:00 crc kubenswrapper[4988]: I1123 08:06:00.376839 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:06:00 crc kubenswrapper[4988]: I1123 08:06:00.417464 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-97464f77-cjzc6"] Nov 23 08:06:00 crc kubenswrapper[4988]: W1123 08:06:00.423452 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd95ee697_07b0_42af_8d15_57661cd6de66.slice/crio-bd2df50b630c97a876c89b01daf2fa5cabb31aa4dee9bf01ba6cb88f5d3c5224 WatchSource:0}: Error finding container bd2df50b630c97a876c89b01daf2fa5cabb31aa4dee9bf01ba6cb88f5d3c5224: Status 404 returned error can't find the container with id bd2df50b630c97a876c89b01daf2fa5cabb31aa4dee9bf01ba6cb88f5d3c5224 Nov 23 08:06:00 crc kubenswrapper[4988]: I1123 08:06:00.895248 4988 generic.go:334] "Generic (PLEG): container finished" podID="d95ee697-07b0-42af-8d15-57661cd6de66" containerID="22b38bf2fe9724ef0c8f7ddd3f3ea51b156c053091684fc33e46897e87b3b899" exitCode=0 Nov 23 08:06:00 crc kubenswrapper[4988]: I1123 08:06:00.895564 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-cjzc6" event={"ID":"d95ee697-07b0-42af-8d15-57661cd6de66","Type":"ContainerDied","Data":"22b38bf2fe9724ef0c8f7ddd3f3ea51b156c053091684fc33e46897e87b3b899"} Nov 23 08:06:00 crc kubenswrapper[4988]: I1123 08:06:00.895641 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-cjzc6" event={"ID":"d95ee697-07b0-42af-8d15-57661cd6de66","Type":"ContainerStarted","Data":"bd2df50b630c97a876c89b01daf2fa5cabb31aa4dee9bf01ba6cb88f5d3c5224"} Nov 23 08:06:01 crc kubenswrapper[4988]: I1123 08:06:01.150109 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:06:01 crc kubenswrapper[4988]: I1123 08:06:01.908310 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-cjzc6" event={"ID":"d95ee697-07b0-42af-8d15-57661cd6de66","Type":"ContainerStarted","Data":"59566b8bc3b63316ff5ca4ff2481134ff43cc3308e612e89ed546eddeec3159d"} Nov 23 08:06:01 crc kubenswrapper[4988]: I1123 08:06:01.908592 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:06:01 crc kubenswrapper[4988]: I1123 08:06:01.935991 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-97464f77-cjzc6" podStartSLOduration=2.935975782 podStartE2EDuration="2.935975782s" podCreationTimestamp="2025-11-23 08:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:06:01.931764669 +0000 UTC m=+4814.240277432" watchObservedRunningTime="2025-11-23 08:06:01.935975782 +0000 UTC m=+4814.244488545" Nov 23 08:06:04 crc kubenswrapper[4988]: I1123 08:06:04.788711 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="886af2c6-ba57-498a-9c4a-c85f37b51f57" containerName="rabbitmq" containerID="cri-o://00ad0224719e8c058156032c59b7932397c1524c67dd79db9a3e400da41699d1" gracePeriod=604796 Nov 23 08:06:05 crc kubenswrapper[4988]: I1123 08:06:05.714300 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9f36dd15-56cc-4eb7-93f6-3e756c558d46" containerName="rabbitmq" containerID="cri-o://2cbc9c80965fe06441143100f23f68a85205e4404401eec41f392f7aed200c2b" gracePeriod=604796 Nov 23 08:06:09 crc kubenswrapper[4988]: I1123 08:06:09.918517 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:06:09 crc kubenswrapper[4988]: I1123 08:06:09.988015 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-pg2jt"] Nov 23 08:06:09 crc kubenswrapper[4988]: I1123 08:06:09.988741 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" podUID="c08f4458-1a95-439f-b110-6e16f7b80315" containerName="dnsmasq-dns" containerID="cri-o://384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5" gracePeriod=10 Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.383596 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.494114 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-dns-svc\") pod \"c08f4458-1a95-439f-b110-6e16f7b80315\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.494216 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l56x9\" (UniqueName: \"kubernetes.io/projected/c08f4458-1a95-439f-b110-6e16f7b80315-kube-api-access-l56x9\") pod \"c08f4458-1a95-439f-b110-6e16f7b80315\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.494251 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-config\") pod \"c08f4458-1a95-439f-b110-6e16f7b80315\" (UID: \"c08f4458-1a95-439f-b110-6e16f7b80315\") " Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.504367 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08f4458-1a95-439f-b110-6e16f7b80315-kube-api-access-l56x9" (OuterVolumeSpecName: "kube-api-access-l56x9") pod "c08f4458-1a95-439f-b110-6e16f7b80315" (UID: "c08f4458-1a95-439f-b110-6e16f7b80315"). InnerVolumeSpecName "kube-api-access-l56x9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.554138 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c08f4458-1a95-439f-b110-6e16f7b80315" (UID: "c08f4458-1a95-439f-b110-6e16f7b80315"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.566239 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-config" (OuterVolumeSpecName: "config") pod "c08f4458-1a95-439f-b110-6e16f7b80315" (UID: "c08f4458-1a95-439f-b110-6e16f7b80315"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.597004 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.597138 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l56x9\" (UniqueName: \"kubernetes.io/projected/c08f4458-1a95-439f-b110-6e16f7b80315-kube-api-access-l56x9\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:10 crc kubenswrapper[4988]: I1123 08:06:10.597241 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08f4458-1a95-439f-b110-6e16f7b80315-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.011621 4988 generic.go:334] "Generic (PLEG): container finished" podID="886af2c6-ba57-498a-9c4a-c85f37b51f57" containerID="00ad0224719e8c058156032c59b7932397c1524c67dd79db9a3e400da41699d1" exitCode=0 Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.011696 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"886af2c6-ba57-498a-9c4a-c85f37b51f57","Type":"ContainerDied","Data":"00ad0224719e8c058156032c59b7932397c1524c67dd79db9a3e400da41699d1"} Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.014691 4988 generic.go:334] "Generic (PLEG): container finished" podID="c08f4458-1a95-439f-b110-6e16f7b80315" containerID="384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5" exitCode=0 Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.014741 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" event={"ID":"c08f4458-1a95-439f-b110-6e16f7b80315","Type":"ContainerDied","Data":"384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5"} Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.014768 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" event={"ID":"c08f4458-1a95-439f-b110-6e16f7b80315","Type":"ContainerDied","Data":"962ffb9141259314809e1104f0dc4bfdaa49c0d20dce752227b416458ac24592"} Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.014793 4988 scope.go:117] "RemoveContainer" containerID="384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.014953 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74bc88c489-pg2jt" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.065623 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-pg2jt"] Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.076802 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-pg2jt"] Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.097172 4988 scope.go:117] "RemoveContainer" containerID="d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.164309 4988 scope.go:117] "RemoveContainer" containerID="384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5" Nov 23 08:06:11 crc kubenswrapper[4988]: E1123 08:06:11.167753 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5\": container with ID starting with 384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5 not found: ID does not exist" containerID="384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.167819 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5"} err="failed to get container status \"384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5\": rpc error: code = NotFound desc = could not find container \"384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5\": container with ID starting with 384a0f823635d469631361700ea28d15e8fce057bb3fc8bd9396bcb8599086e5 not found: ID does not exist" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.167886 4988 scope.go:117] "RemoveContainer" containerID="d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a" Nov 23 08:06:11 crc kubenswrapper[4988]: E1123 08:06:11.168313 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a\": container with ID starting with d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a not found: ID does not exist" containerID="d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.168348 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a"} err="failed to get container status \"d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a\": rpc error: code = NotFound desc = could not find container \"d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a\": container with ID starting with d265aff039b117209c2d1ce2f77781bea0160ff1cb0add213fbb32fc55f6273a not found: ID does not exist" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.357948 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.515596 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-tls\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.516267 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-config-data\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.516419 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.516451 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886af2c6-ba57-498a-9c4a-c85f37b51f57-erlang-cookie-secret\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.516495 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-plugins\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.517120 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxqw4\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-kube-api-access-pxqw4\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.517216 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-plugins-conf\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.517256 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-erlang-cookie\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.517306 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-server-conf\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.517338 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-confd\") pod 
\"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.517366 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886af2c6-ba57-498a-9c4a-c85f37b51f57-pod-info\") pod \"886af2c6-ba57-498a-9c4a-c85f37b51f57\" (UID: \"886af2c6-ba57-498a-9c4a-c85f37b51f57\") " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.517251 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.517757 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.517792 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.518220 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.521917 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886af2c6-ba57-498a-9c4a-c85f37b51f57-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.524241 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/886af2c6-ba57-498a-9c4a-c85f37b51f57-pod-info" (OuterVolumeSpecName: "pod-info") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.524420 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-kube-api-access-pxqw4" (OuterVolumeSpecName: "kube-api-access-pxqw4") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "kube-api-access-pxqw4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.535531 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.538351 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca" (OuterVolumeSpecName: "persistence") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.558050 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-config-data" (OuterVolumeSpecName: "config-data") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.589816 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-server-conf" (OuterVolumeSpecName: "server-conf") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.610396 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "886af2c6-ba57-498a-9c4a-c85f37b51f57" (UID: "886af2c6-ba57-498a-9c4a-c85f37b51f57"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619076 4988 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619098 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619111 4988 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-server-conf\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619122 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619133 4988 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/886af2c6-ba57-498a-9c4a-c85f37b51f57-pod-info\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619143 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619155 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/886af2c6-ba57-498a-9c4a-c85f37b51f57-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619256 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") on node \"crc\" " Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619362 4988 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/886af2c6-ba57-498a-9c4a-c85f37b51f57-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.619376 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxqw4\" (UniqueName: \"kubernetes.io/projected/886af2c6-ba57-498a-9c4a-c85f37b51f57-kube-api-access-pxqw4\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.633864 4988 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.633979 4988 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca") on node "crc" Nov 23 08:06:11 crc kubenswrapper[4988]: I1123 08:06:11.721505 4988 reconciler_common.go:293] "Volume detached for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.027860 4988 generic.go:334] "Generic (PLEG): container finished" podID="9f36dd15-56cc-4eb7-93f6-3e756c558d46" containerID="2cbc9c80965fe06441143100f23f68a85205e4404401eec41f392f7aed200c2b" exitCode=0 Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.027972 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f36dd15-56cc-4eb7-93f6-3e756c558d46","Type":"ContainerDied","Data":"2cbc9c80965fe06441143100f23f68a85205e4404401eec41f392f7aed200c2b"} Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.033905 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"886af2c6-ba57-498a-9c4a-c85f37b51f57","Type":"ContainerDied","Data":"18f4c55d93981884c29856af3a62c51c719aa9aca4d843a51ecb742901dff3d9"} Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.033988 4988 scope.go:117] "RemoveContainer" containerID="00ad0224719e8c058156032c59b7932397c1524c67dd79db9a3e400da41699d1" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.034062 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.079650 4988 scope.go:117] "RemoveContainer" containerID="aba3514fbcb5d997efcb6ca3bbac23486cfe1410000c86d45f92ce2fb99823cd" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.098505 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.106979 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.147720 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:06:12 crc kubenswrapper[4988]: E1123 08:06:12.148269 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08f4458-1a95-439f-b110-6e16f7b80315" containerName="init" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.148292 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08f4458-1a95-439f-b110-6e16f7b80315" containerName="init" Nov 23 08:06:12 crc kubenswrapper[4988]: E1123 08:06:12.148317 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886af2c6-ba57-498a-9c4a-c85f37b51f57" containerName="rabbitmq" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.148327 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="886af2c6-ba57-498a-9c4a-c85f37b51f57" containerName="rabbitmq" Nov 23 08:06:12 crc kubenswrapper[4988]: E1123 08:06:12.148347 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886af2c6-ba57-498a-9c4a-c85f37b51f57" containerName="setup-container" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.148358 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="886af2c6-ba57-498a-9c4a-c85f37b51f57" containerName="setup-container" Nov 23 08:06:12 crc kubenswrapper[4988]: E1123 08:06:12.148436 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08f4458-1a95-439f-b110-6e16f7b80315" containerName="dnsmasq-dns" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.148448 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08f4458-1a95-439f-b110-6e16f7b80315" containerName="dnsmasq-dns" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.148665 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="886af2c6-ba57-498a-9c4a-c85f37b51f57" containerName="rabbitmq" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.148684 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08f4458-1a95-439f-b110-6e16f7b80315" containerName="dnsmasq-dns" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.165356 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.168921 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.169295 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.169548 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-z64gb" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.169830 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.170000 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.170176 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.170459 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.171688 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.339868 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc03f11b-e69c-48c7-80e9-a044721fbf1e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.339923 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc03f11b-e69c-48c7-80e9-a044721fbf1e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.339949 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc 
kubenswrapper[4988]: I1123 08:06:12.340178 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.340282 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc03f11b-e69c-48c7-80e9-a044721fbf1e-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.340353 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.340397 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc03f11b-e69c-48c7-80e9-a044721fbf1e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.340440 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.340512 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8gl6\" (UniqueName: \"kubernetes.io/projected/cc03f11b-e69c-48c7-80e9-a044721fbf1e-kube-api-access-l8gl6\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.340548 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc03f11b-e69c-48c7-80e9-a044721fbf1e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.340601 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.382436 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442028 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-erlang-cookie\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442127 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmxfl\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-kube-api-access-nmxfl\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442288 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f36dd15-56cc-4eb7-93f6-3e756c558d46-erlang-cookie-secret\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442309 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-confd\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442339 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-plugins\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442368 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-config-data\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442387 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-server-conf\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442416 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-plugins-conf\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442450 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f36dd15-56cc-4eb7-93f6-3e756c558d46-pod-info\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442470 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod 
"9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442488 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-tls\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442599 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") pod \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\" (UID: \"9f36dd15-56cc-4eb7-93f6-3e756c558d46\") " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442811 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442854 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc03f11b-e69c-48c7-80e9-a044721fbf1e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.442887 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.443465 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.444328 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc03f11b-e69c-48c7-80e9-a044721fbf1e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.444710 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8gl6\" (UniqueName: \"kubernetes.io/projected/cc03f11b-e69c-48c7-80e9-a044721fbf1e-kube-api-access-l8gl6\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.444721 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.444741 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc03f11b-e69c-48c7-80e9-a044721fbf1e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.449606 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.447126 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc03f11b-e69c-48c7-80e9-a044721fbf1e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.448838 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.449807 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc03f11b-e69c-48c7-80e9-a044721fbf1e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.449869 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc03f11b-e69c-48c7-80e9-a044721fbf1e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.449930 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.450085 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.450183 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc03f11b-e69c-48c7-80e9-a044721fbf1e-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.450372 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.450406 4988 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.450428 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.451582 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc03f11b-e69c-48c7-80e9-a044721fbf1e-config-data\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.451867 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.452021 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f36dd15-56cc-4eb7-93f6-3e756c558d46-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.452868 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.453129 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-kube-api-access-nmxfl" (OuterVolumeSpecName: "kube-api-access-nmxfl") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "kube-api-access-nmxfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.453337 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9f36dd15-56cc-4eb7-93f6-3e756c558d46-pod-info" (OuterVolumeSpecName: "pod-info") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.454717 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.454781 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3408bfff62d4ad41e60bfe9f001136a685473f5a05f5e149e50634d5817dca8c/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.455591 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc03f11b-e69c-48c7-80e9-a044721fbf1e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.463868 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc03f11b-e69c-48c7-80e9-a044721fbf1e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.464075 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.465007 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc03f11b-e69c-48c7-80e9-a044721fbf1e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.467122 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169" (OuterVolumeSpecName: "persistence") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.488721 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8gl6\" (UniqueName: \"kubernetes.io/projected/cc03f11b-e69c-48c7-80e9-a044721fbf1e-kube-api-access-l8gl6\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.495612 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-server-conf" (OuterVolumeSpecName: "server-conf") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.502012 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-config-data" (OuterVolumeSpecName: "config-data") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.505884 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886af2c6-ba57-498a-9c4a-c85f37b51f57" path="/var/lib/kubelet/pods/886af2c6-ba57-498a-9c4a-c85f37b51f57/volumes" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.506619 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08f4458-1a95-439f-b110-6e16f7b80315" path="/var/lib/kubelet/pods/c08f4458-1a95-439f-b110-6e16f7b80315/volumes" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.526997 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f40b092a-db32-4ca8-b3aa-abe210aec9ca\") pod \"rabbitmq-server-0\" (UID: \"cc03f11b-e69c-48c7-80e9-a044721fbf1e\") " pod="openstack/rabbitmq-server-0" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.551829 4988 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f36dd15-56cc-4eb7-93f6-3e756c558d46-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.551863 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.551872 4988 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f36dd15-56cc-4eb7-93f6-3e756c558d46-server-conf\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.551879 4988 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f36dd15-56cc-4eb7-93f6-3e756c558d46-pod-info\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.551887 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.551920 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") on node \"crc\" " Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.551934 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmxfl\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-kube-api-access-nmxfl\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.568398 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-confd" (OuterVolumeSpecName: 
"rabbitmq-confd") pod "9f36dd15-56cc-4eb7-93f6-3e756c558d46" (UID: "9f36dd15-56cc-4eb7-93f6-3e756c558d46"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.573319 4988 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.573504 4988 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169") on node "crc" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.653887 4988 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f36dd15-56cc-4eb7-93f6-3e756c558d46-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.653954 4988 reconciler_common.go:293] "Volume detached for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") on node \"crc\" DevicePath \"\"" Nov 23 08:06:12 crc kubenswrapper[4988]: I1123 08:06:12.796698 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.049827 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9f36dd15-56cc-4eb7-93f6-3e756c558d46","Type":"ContainerDied","Data":"7087a43fe8cbff2a9233203765b48021c1caabbf7fc4707f204743fb9ac38115"} Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.050032 4988 scope.go:117] "RemoveContainer" containerID="2cbc9c80965fe06441143100f23f68a85205e4404401eec41f392f7aed200c2b" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.050064 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.072791 4988 scope.go:117] "RemoveContainer" containerID="dad119f0b594536a909c1edefc336251d032d44b6f0b9a72551951248420978a" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.085401 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.093706 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.104273 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:06:13 crc kubenswrapper[4988]: E1123 08:06:13.104617 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f36dd15-56cc-4eb7-93f6-3e756c558d46" containerName="rabbitmq" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.105082 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f36dd15-56cc-4eb7-93f6-3e756c558d46" containerName="rabbitmq" Nov 23 08:06:13 crc kubenswrapper[4988]: E1123 08:06:13.105108 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f36dd15-56cc-4eb7-93f6-3e756c558d46" containerName="setup-container" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.105116 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f36dd15-56cc-4eb7-93f6-3e756c558d46" containerName="setup-container" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.105280 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f36dd15-56cc-4eb7-93f6-3e756c558d46" containerName="rabbitmq" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.106105 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.114582 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.114759 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.114868 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dtnkp" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.114989 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.115102 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.115248 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.115414 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.130996 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262514 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262568 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f746d292-0944-4bc3-8abc-71f42dbe6957-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262613 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjbwq\" (UniqueName: \"kubernetes.io/projected/f746d292-0944-4bc3-8abc-71f42dbe6957-kube-api-access-xjbwq\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262636 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f746d292-0944-4bc3-8abc-71f42dbe6957-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262677 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262708 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f746d292-0944-4bc3-8abc-71f42dbe6957-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262736 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f746d292-0944-4bc3-8abc-71f42dbe6957-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262772 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f746d292-0944-4bc3-8abc-71f42dbe6957-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262795 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262826 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.262852 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.299150 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364038 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364106 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f746d292-0944-4bc3-8abc-71f42dbe6957-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364157 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f746d292-0944-4bc3-8abc-71f42dbe6957-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364228 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f746d292-0944-4bc3-8abc-71f42dbe6957-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364259 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364300 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364335 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364417 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364455 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f746d292-0944-4bc3-8abc-71f42dbe6957-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364505 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjbwq\" (UniqueName: \"kubernetes.io/projected/f746d292-0944-4bc3-8abc-71f42dbe6957-kube-api-access-xjbwq\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.364538 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f746d292-0944-4bc3-8abc-71f42dbe6957-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.365454 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 
23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.365509 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.365914 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f746d292-0944-4bc3-8abc-71f42dbe6957-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.366138 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f746d292-0944-4bc3-8abc-71f42dbe6957-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.366396 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f746d292-0944-4bc3-8abc-71f42dbe6957-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.367760 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.368071 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f746d292-0944-4bc3-8abc-71f42dbe6957-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.368251 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f746d292-0944-4bc3-8abc-71f42dbe6957-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.370080 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.370233 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d46a246ca526fcef24696c18c6f4480a3191c51b23a1173f24782f0c1da5792/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.371647 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f746d292-0944-4bc3-8abc-71f42dbe6957-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.396606 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjbwq\" (UniqueName: \"kubernetes.io/projected/f746d292-0944-4bc3-8abc-71f42dbe6957-kube-api-access-xjbwq\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.405910 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d953c2c9-c7af-4455-bea9-0cefe55cc169\") pod \"rabbitmq-cell1-server-0\" (UID: \"f746d292-0944-4bc3-8abc-71f42dbe6957\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.496474 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:06:13 crc kubenswrapper[4988]: I1123 08:06:13.780993 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:06:13 crc kubenswrapper[4988]: W1123 08:06:13.786459 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf746d292_0944_4bc3_8abc_71f42dbe6957.slice/crio-90e562b39e14a15281151671b3a6ddb6f08ef3c385022af4cdc33947f958e3f6 WatchSource:0}: Error finding container 90e562b39e14a15281151671b3a6ddb6f08ef3c385022af4cdc33947f958e3f6: Status 404 returned error can't find the container with id 90e562b39e14a15281151671b3a6ddb6f08ef3c385022af4cdc33947f958e3f6 Nov 23 08:06:14 crc kubenswrapper[4988]: I1123 08:06:14.064817 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc03f11b-e69c-48c7-80e9-a044721fbf1e","Type":"ContainerStarted","Data":"fe41fe1ec0991c717c7f91eb96019ceb82102a2017cb74b058e7f8ba3fde2976"} Nov 23 08:06:14 crc kubenswrapper[4988]: I1123 08:06:14.069565 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f746d292-0944-4bc3-8abc-71f42dbe6957","Type":"ContainerStarted","Data":"90e562b39e14a15281151671b3a6ddb6f08ef3c385022af4cdc33947f958e3f6"} Nov 23 08:06:14 crc kubenswrapper[4988]: I1123 08:06:14.514171 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f36dd15-56cc-4eb7-93f6-3e756c558d46" path="/var/lib/kubelet/pods/9f36dd15-56cc-4eb7-93f6-3e756c558d46/volumes" Nov 23 08:06:16 crc kubenswrapper[4988]: I1123 08:06:16.094913 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc03f11b-e69c-48c7-80e9-a044721fbf1e","Type":"ContainerStarted","Data":"01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36"} Nov 23 08:06:16 crc kubenswrapper[4988]: I1123 08:06:16.098558 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f746d292-0944-4bc3-8abc-71f42dbe6957","Type":"ContainerStarted","Data":"bcba7f1a9a1ed8c0954dd12c218aaa216fc0f887c1821f0872a1bf659b0b4dde"} Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.531935 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jmfjc"] Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.535379 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.548017 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmfjc"] Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.699906 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-catalog-content\") pod \"community-operators-jmfjc\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") " pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.699981 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-utilities\") pod \"community-operators-jmfjc\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") " pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.700438 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5d8\" (UniqueName: \"kubernetes.io/projected/b13f2133-1e8e-4de6-86a3-db00df46d411-kube-api-access-md5d8\") pod \"community-operators-jmfjc\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") " pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.802013 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-utilities\") pod \"community-operators-jmfjc\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") " pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.802331 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5d8\" (UniqueName: \"kubernetes.io/projected/b13f2133-1e8e-4de6-86a3-db00df46d411-kube-api-access-md5d8\") pod \"community-operators-jmfjc\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") " pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.802427 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-catalog-content\") pod \"community-operators-jmfjc\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") " pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.802792 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-utilities\") pod \"community-operators-jmfjc\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") " pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.803062 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-catalog-content\") pod \"community-operators-jmfjc\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") " pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.835711 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-md5d8\" (UniqueName: \"kubernetes.io/projected/b13f2133-1e8e-4de6-86a3-db00df46d411-kube-api-access-md5d8\") pod \"community-operators-jmfjc\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") " pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:25 crc kubenswrapper[4988]: I1123 08:06:25.878470 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmfjc" Nov 23 08:06:26 crc kubenswrapper[4988]: I1123 08:06:26.388105 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jmfjc"] Nov 23 08:06:26 crc kubenswrapper[4988]: W1123 08:06:26.396984 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb13f2133_1e8e_4de6_86a3_db00df46d411.slice/crio-67f989d43a76f36dae2bf74c847970ce8efe179bd79304a6f7e56db0f1dc84e3 WatchSource:0}: Error finding container 67f989d43a76f36dae2bf74c847970ce8efe179bd79304a6f7e56db0f1dc84e3: Status 404 returned error can't find the container with id 67f989d43a76f36dae2bf74c847970ce8efe179bd79304a6f7e56db0f1dc84e3 Nov 23 08:06:27 crc kubenswrapper[4988]: I1123 08:06:27.204505 4988 generic.go:334] "Generic (PLEG): container finished" podID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerID="2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8" exitCode=0 Nov 23 08:06:27 crc kubenswrapper[4988]: I1123 08:06:27.204632 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmfjc" event={"ID":"b13f2133-1e8e-4de6-86a3-db00df46d411","Type":"ContainerDied","Data":"2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8"} Nov 23 08:06:27 crc kubenswrapper[4988]: I1123 08:06:27.204967 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmfjc" event={"ID":"b13f2133-1e8e-4de6-86a3-db00df46d411","Type":"ContainerStarted","Data":"67f989d43a76f36dae2bf74c847970ce8efe179bd79304a6f7e56db0f1dc84e3"} Nov 23 08:06:28 crc kubenswrapper[4988]: I1123 08:06:28.219251 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmfjc" event={"ID":"b13f2133-1e8e-4de6-86a3-db00df46d411","Type":"ContainerStarted","Data":"4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551"} Nov 23 08:06:29 crc kubenswrapper[4988]: I1123 08:06:29.229574 4988 generic.go:334] "Generic (PLEG): container finished" podID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerID="4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551" exitCode=0 Nov 23 08:06:29 crc kubenswrapper[4988]: I1123 08:06:29.229633 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmfjc" event={"ID":"b13f2133-1e8e-4de6-86a3-db00df46d411","Type":"ContainerDied","Data":"4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551"} Nov 23 08:06:30 crc kubenswrapper[4988]: I1123 08:06:30.242581 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmfjc" event={"ID":"b13f2133-1e8e-4de6-86a3-db00df46d411","Type":"ContainerStarted","Data":"d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a"} Nov 23 08:06:30 crc kubenswrapper[4988]: I1123 08:06:30.278566 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jmfjc" 
Nov 23 08:06:35 crc kubenswrapper[4988]: I1123 08:06:35.878939 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jmfjc"
Nov 23 08:06:35 crc kubenswrapper[4988]: I1123 08:06:35.879614 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jmfjc"
Nov 23 08:06:35 crc kubenswrapper[4988]: I1123 08:06:35.944350 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jmfjc"
Nov 23 08:06:36 crc kubenswrapper[4988]: I1123 08:06:36.371923 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jmfjc"
Nov 23 08:06:36 crc kubenswrapper[4988]: I1123 08:06:36.448478 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmfjc"]
Nov 23 08:06:38 crc kubenswrapper[4988]: I1123 08:06:38.320227 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jmfjc" podUID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerName="registry-server" containerID="cri-o://d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a" gracePeriod=2
Nov 23 08:06:38 crc kubenswrapper[4988]: I1123 08:06:38.846402 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmfjc"
Nov 23 08:06:38 crc kubenswrapper[4988]: I1123 08:06:38.987324 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md5d8\" (UniqueName: \"kubernetes.io/projected/b13f2133-1e8e-4de6-86a3-db00df46d411-kube-api-access-md5d8\") pod \"b13f2133-1e8e-4de6-86a3-db00df46d411\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") "
Nov 23 08:06:38 crc kubenswrapper[4988]: I1123 08:06:38.987396 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-catalog-content\") pod \"b13f2133-1e8e-4de6-86a3-db00df46d411\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") "
Nov 23 08:06:38 crc kubenswrapper[4988]: I1123 08:06:38.987573 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-utilities\") pod \"b13f2133-1e8e-4de6-86a3-db00df46d411\" (UID: \"b13f2133-1e8e-4de6-86a3-db00df46d411\") "
Nov 23 08:06:38 crc kubenswrapper[4988]: I1123 08:06:38.988493 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-utilities" (OuterVolumeSpecName: "utilities") pod "b13f2133-1e8e-4de6-86a3-db00df46d411" (UID: "b13f2133-1e8e-4de6-86a3-db00df46d411"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:06:38 crc kubenswrapper[4988]: I1123 08:06:38.996376 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13f2133-1e8e-4de6-86a3-db00df46d411-kube-api-access-md5d8" (OuterVolumeSpecName: "kube-api-access-md5d8") pod "b13f2133-1e8e-4de6-86a3-db00df46d411" (UID: "b13f2133-1e8e-4de6-86a3-db00df46d411"). InnerVolumeSpecName "kube-api-access-md5d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.089295 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.089320 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md5d8\" (UniqueName: \"kubernetes.io/projected/b13f2133-1e8e-4de6-86a3-db00df46d411-kube-api-access-md5d8\") on node \"crc\" DevicePath \"\""
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.332654 4988 generic.go:334] "Generic (PLEG): container finished" podID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerID="d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a" exitCode=0
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.332712 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmfjc" event={"ID":"b13f2133-1e8e-4de6-86a3-db00df46d411","Type":"ContainerDied","Data":"d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a"}
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.332750 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jmfjc" event={"ID":"b13f2133-1e8e-4de6-86a3-db00df46d411","Type":"ContainerDied","Data":"67f989d43a76f36dae2bf74c847970ce8efe179bd79304a6f7e56db0f1dc84e3"}
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.332744 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jmfjc"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.332775 4988 scope.go:117] "RemoveContainer" containerID="d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.361267 4988 scope.go:117] "RemoveContainer" containerID="4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.389891 4988 scope.go:117] "RemoveContainer" containerID="2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.416140 4988 scope.go:117] "RemoveContainer" containerID="d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a"
Nov 23 08:06:39 crc kubenswrapper[4988]: E1123 08:06:39.416741 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a\": container with ID starting with d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a not found: ID does not exist" containerID="d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.416774 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a"} err="failed to get container status \"d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a\": rpc error: code = NotFound desc = could not find container \"d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a\": container with ID starting with d9076df0e2543f79c706c58a1a91fcccc21132c2c428993180b6f7b749280b7a not found: ID does not exist"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.416794 4988 scope.go:117] "RemoveContainer" containerID="4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551"
Nov 23 08:06:39 crc kubenswrapper[4988]: E1123 08:06:39.418695 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551\": container with ID starting with 4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551 not found: ID does not exist" containerID="4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.418720 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551"} err="failed to get container status \"4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551\": rpc error: code = NotFound desc = could not find container \"4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551\": container with ID starting with 4be115c4f64f802ca40d558adee2c684ead9fa3054f7cfffc10c536fb73fa551 not found: ID does not exist"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.418735 4988 scope.go:117] "RemoveContainer" containerID="2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8"
Nov 23 08:06:39 crc kubenswrapper[4988]: E1123 08:06:39.419078 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8\": container with ID starting with 2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8 not found: ID does not exist" containerID="2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.419101 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8"} err="failed to get container status \"2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8\": rpc error: code = NotFound desc = could not find container \"2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8\": container with ID starting with 2a42768cd203a4a8bf53e8d01f1d2c2e89c44367b6d8f656005a6f6640c285a8 not found: ID does not exist"
Nov 23 08:06:39 crc kubenswrapper[4988]: I1123 08:06:39.972128 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b13f2133-1e8e-4de6-86a3-db00df46d411" (UID: "b13f2133-1e8e-4de6-86a3-db00df46d411"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:06:40 crc kubenswrapper[4988]: I1123 08:06:40.016902 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13f2133-1e8e-4de6-86a3-db00df46d411-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 08:06:40 crc kubenswrapper[4988]: I1123 08:06:40.272823 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jmfjc"]
Nov 23 08:06:40 crc kubenswrapper[4988]: I1123 08:06:40.284522 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jmfjc"]
Nov 23 08:06:40 crc kubenswrapper[4988]: E1123 08:06:40.298414 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb13f2133_1e8e_4de6_86a3_db00df46d411.slice\": RecentStats: unable to find data in memory cache]"
Nov 23 08:06:40 crc kubenswrapper[4988]: I1123 08:06:40.510866 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13f2133-1e8e-4de6-86a3-db00df46d411" path="/var/lib/kubelet/pods/b13f2133-1e8e-4de6-86a3-db00df46d411/volumes"
Nov 23 08:06:49 crc kubenswrapper[4988]: I1123 08:06:49.432445 4988 generic.go:334] "Generic (PLEG): container finished" podID="f746d292-0944-4bc3-8abc-71f42dbe6957" containerID="bcba7f1a9a1ed8c0954dd12c218aaa216fc0f887c1821f0872a1bf659b0b4dde" exitCode=0
Nov 23 08:06:49 crc kubenswrapper[4988]: I1123 08:06:49.432525 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f746d292-0944-4bc3-8abc-71f42dbe6957","Type":"ContainerDied","Data":"bcba7f1a9a1ed8c0954dd12c218aaa216fc0f887c1821f0872a1bf659b0b4dde"}
Nov 23 08:06:49 crc kubenswrapper[4988]: I1123 08:06:49.436251 4988 generic.go:334] "Generic (PLEG): container finished" podID="cc03f11b-e69c-48c7-80e9-a044721fbf1e" containerID="01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36" exitCode=0
Nov 23 08:06:49 crc kubenswrapper[4988]: I1123 08:06:49.436321 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc03f11b-e69c-48c7-80e9-a044721fbf1e","Type":"ContainerDied","Data":"01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36"}
Nov 23 08:06:50 crc kubenswrapper[4988]: I1123 08:06:50.444316 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f746d292-0944-4bc3-8abc-71f42dbe6957","Type":"ContainerStarted","Data":"85df0221e96cd39bba5eacebba87bb6cc90bed94b8d16ce765ec498605f67f8f"}
Nov 23 08:06:50 crc kubenswrapper[4988]: I1123 08:06:50.444899 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 08:06:50 crc kubenswrapper[4988]: I1123 08:06:50.449146 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cc03f11b-e69c-48c7-80e9-a044721fbf1e","Type":"ContainerStarted","Data":"f9c1197ab236c05bf68f48f334ca89345805f7ded48b122bf13231e40dcd5e65"}
Nov 23 08:06:50 crc kubenswrapper[4988]: I1123 08:06:50.449468 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Nov 23 08:06:50 crc kubenswrapper[4988]: I1123 08:06:50.476477 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.476460243 podStartE2EDuration="37.476460243s" podCreationTimestamp="2025-11-23 08:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:06:50.467476293 +0000 UTC m=+4862.775989136" watchObservedRunningTime="2025-11-23 08:06:50.476460243 +0000 UTC m=+4862.784973006"
Nov 23 08:06:50 crc kubenswrapper[4988]: E1123 08:06:50.492351 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-conmon-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 08:06:50 crc kubenswrapper[4988]: I1123 08:06:50.497410 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.497388746 podStartE2EDuration="38.497388746s" podCreationTimestamp="2025-11-23 08:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:06:50.495025898 +0000 UTC m=+4862.803538671" watchObservedRunningTime="2025-11-23 08:06:50.497388746 +0000 UTC m=+4862.805901509"
Nov 23 08:07:00 crc kubenswrapper[4988]: E1123 08:07:00.760245 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-conmon-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 08:07:02 crc kubenswrapper[4988]: I1123 08:07:02.800438 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 23 08:07:03 crc kubenswrapper[4988]: I1123 08:07:03.499459 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 08:07:06 crc kubenswrapper[4988]: I1123 08:07:06.902960 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"]
Nov 23 08:07:06 crc kubenswrapper[4988]: E1123 08:07:06.903813 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerName="extract-utilities"
Nov 23 08:07:06 crc kubenswrapper[4988]: I1123 08:07:06.903836 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerName="extract-utilities"
Nov 23 08:07:06 crc kubenswrapper[4988]: E1123 08:07:06.903892 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerName="extract-content"
Nov 23 08:07:06 crc kubenswrapper[4988]: I1123 08:07:06.903907 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerName="extract-content"
Nov 23 08:07:06 crc kubenswrapper[4988]: E1123 08:07:06.903939 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerName="registry-server"
Nov 23 08:07:06 crc kubenswrapper[4988]: I1123 08:07:06.903951 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerName="registry-server"
Nov 23 08:07:06 crc kubenswrapper[4988]: I1123 08:07:06.904156 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13f2133-1e8e-4de6-86a3-db00df46d411" containerName="registry-server"
Nov 23 08:07:06 crc kubenswrapper[4988]: I1123 08:07:06.904796 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default"
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 23 08:07:06 crc kubenswrapper[4988]: I1123 08:07:06.907233 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q8l7k" Nov 23 08:07:06 crc kubenswrapper[4988]: I1123 08:07:06.933088 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 23 08:07:07 crc kubenswrapper[4988]: I1123 08:07:07.005856 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2p7\" (UniqueName: \"kubernetes.io/projected/f5a33cc0-95b2-4b2c-b58e-efa017e92c0a-kube-api-access-ql2p7\") pod \"mariadb-client-1-default\" (UID: \"f5a33cc0-95b2-4b2c-b58e-efa017e92c0a\") " pod="openstack/mariadb-client-1-default" Nov 23 08:07:07 crc kubenswrapper[4988]: I1123 08:07:07.108076 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql2p7\" (UniqueName: \"kubernetes.io/projected/f5a33cc0-95b2-4b2c-b58e-efa017e92c0a-kube-api-access-ql2p7\") pod \"mariadb-client-1-default\" (UID: \"f5a33cc0-95b2-4b2c-b58e-efa017e92c0a\") " pod="openstack/mariadb-client-1-default" Nov 23 08:07:07 crc kubenswrapper[4988]: I1123 08:07:07.129872 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql2p7\" (UniqueName: \"kubernetes.io/projected/f5a33cc0-95b2-4b2c-b58e-efa017e92c0a-kube-api-access-ql2p7\") pod \"mariadb-client-1-default\" (UID: \"f5a33cc0-95b2-4b2c-b58e-efa017e92c0a\") " pod="openstack/mariadb-client-1-default" Nov 23 08:07:07 crc kubenswrapper[4988]: I1123 08:07:07.223068 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 23 08:07:07 crc kubenswrapper[4988]: I1123 08:07:07.508519 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 23 08:07:07 crc kubenswrapper[4988]: I1123 08:07:07.512316 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:07:07 crc kubenswrapper[4988]: I1123 08:07:07.589447 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"f5a33cc0-95b2-4b2c-b58e-efa017e92c0a","Type":"ContainerStarted","Data":"0a8c16e4b0af30d648b9dce951786800641d9f352f705657b3df8c30477df6a3"} Nov 23 08:07:08 crc kubenswrapper[4988]: I1123 08:07:08.602927 4988 generic.go:334] "Generic (PLEG): container finished" podID="f5a33cc0-95b2-4b2c-b58e-efa017e92c0a" containerID="5cb142c2b497cdc6160087d126183f8f74a958aa20d65e0fb113aa74764ef61c" exitCode=0 Nov 23 08:07:08 crc kubenswrapper[4988]: I1123 08:07:08.602995 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"f5a33cc0-95b2-4b2c-b58e-efa017e92c0a","Type":"ContainerDied","Data":"5cb142c2b497cdc6160087d126183f8f74a958aa20d65e0fb113aa74764ef61c"} Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.586855 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.615600 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_f5a33cc0-95b2-4b2c-b58e-efa017e92c0a/mariadb-client-1-default/0.log" Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.618878 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"f5a33cc0-95b2-4b2c-b58e-efa017e92c0a","Type":"ContainerDied","Data":"0a8c16e4b0af30d648b9dce951786800641d9f352f705657b3df8c30477df6a3"} Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.618901 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a8c16e4b0af30d648b9dce951786800641d9f352f705657b3df8c30477df6a3" Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.618945 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.641039 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.647439 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.666426 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql2p7\" (UniqueName: \"kubernetes.io/projected/f5a33cc0-95b2-4b2c-b58e-efa017e92c0a-kube-api-access-ql2p7\") pod \"f5a33cc0-95b2-4b2c-b58e-efa017e92c0a\" (UID: \"f5a33cc0-95b2-4b2c-b58e-efa017e92c0a\") " Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.674643 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a33cc0-95b2-4b2c-b58e-efa017e92c0a-kube-api-access-ql2p7" (OuterVolumeSpecName: "kube-api-access-ql2p7") pod "f5a33cc0-95b2-4b2c-b58e-efa017e92c0a" (UID: "f5a33cc0-95b2-4b2c-b58e-efa017e92c0a"). InnerVolumeSpecName "kube-api-access-ql2p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:07:10 crc kubenswrapper[4988]: I1123 08:07:10.768909 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql2p7\" (UniqueName: \"kubernetes.io/projected/f5a33cc0-95b2-4b2c-b58e-efa017e92c0a-kube-api-access-ql2p7\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:10 crc kubenswrapper[4988]: E1123 08:07:10.945503 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-conmon-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache]" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.032775 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:07:11 crc kubenswrapper[4988]: E1123 08:07:11.033146 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a33cc0-95b2-4b2c-b58e-efa017e92c0a" containerName="mariadb-client-1-default" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.033164 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a33cc0-95b2-4b2c-b58e-efa017e92c0a" containerName="mariadb-client-1-default" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.033325 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a33cc0-95b2-4b2c-b58e-efa017e92c0a" containerName="mariadb-client-1-default" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.033841 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.036493 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q8l7k" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.040778 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.073349 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsb8z\" (UniqueName: \"kubernetes.io/projected/08648390-1625-4b20-b6af-b31b077a9cd2-kube-api-access-xsb8z\") pod \"mariadb-client-2-default\" (UID: \"08648390-1625-4b20-b6af-b31b077a9cd2\") " pod="openstack/mariadb-client-2-default" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.175628 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsb8z\" (UniqueName: \"kubernetes.io/projected/08648390-1625-4b20-b6af-b31b077a9cd2-kube-api-access-xsb8z\") pod \"mariadb-client-2-default\" (UID: \"08648390-1625-4b20-b6af-b31b077a9cd2\") " pod="openstack/mariadb-client-2-default" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.215856 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsb8z\" (UniqueName: \"kubernetes.io/projected/08648390-1625-4b20-b6af-b31b077a9cd2-kube-api-access-xsb8z\") pod \"mariadb-client-2-default\" (UID: \"08648390-1625-4b20-b6af-b31b077a9cd2\") " pod="openstack/mariadb-client-2-default" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.353368 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 23 08:07:11 crc kubenswrapper[4988]: I1123 08:07:11.937897 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:07:11 crc kubenswrapper[4988]: W1123 08:07:11.941569 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08648390_1625_4b20_b6af_b31b077a9cd2.slice/crio-f2e782faf3dfc42b103d51cecaf83ac375ee313bbcebd821601b38fc04093257 WatchSource:0}: Error finding container f2e782faf3dfc42b103d51cecaf83ac375ee313bbcebd821601b38fc04093257: Status 404 returned error can't find the container with id f2e782faf3dfc42b103d51cecaf83ac375ee313bbcebd821601b38fc04093257 Nov 23 08:07:12 crc kubenswrapper[4988]: I1123 08:07:12.514006 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a33cc0-95b2-4b2c-b58e-efa017e92c0a" path="/var/lib/kubelet/pods/f5a33cc0-95b2-4b2c-b58e-efa017e92c0a/volumes" Nov 23 08:07:12 crc kubenswrapper[4988]: I1123 08:07:12.655498 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"08648390-1625-4b20-b6af-b31b077a9cd2","Type":"ContainerStarted","Data":"84c5174bc13df20e41270aee78cd4a6aa78b43d74c537993af5092a71531ec6c"} Nov 23 08:07:12 crc kubenswrapper[4988]: I1123 08:07:12.655575 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"08648390-1625-4b20-b6af-b31b077a9cd2","Type":"ContainerStarted","Data":"f2e782faf3dfc42b103d51cecaf83ac375ee313bbcebd821601b38fc04093257"} Nov 23 08:07:12 crc kubenswrapper[4988]: I1123 08:07:12.731872 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.731848531 
podStartE2EDuration="1.731848531s" podCreationTimestamp="2025-11-23 08:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:07:12.695755607 +0000 UTC m=+4885.004268410" watchObservedRunningTime="2025-11-23 08:07:12.731848531 +0000 UTC m=+4885.040361304" Nov 23 08:07:13 crc kubenswrapper[4988]: I1123 08:07:13.673035 4988 generic.go:334] "Generic (PLEG): container finished" podID="08648390-1625-4b20-b6af-b31b077a9cd2" containerID="84c5174bc13df20e41270aee78cd4a6aa78b43d74c537993af5092a71531ec6c" exitCode=1 Nov 23 08:07:13 crc kubenswrapper[4988]: I1123 08:07:13.673115 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"08648390-1625-4b20-b6af-b31b077a9cd2","Type":"ContainerDied","Data":"84c5174bc13df20e41270aee78cd4a6aa78b43d74c537993af5092a71531ec6c"} Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.175412 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.208127 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.214942 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.343291 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsb8z\" (UniqueName: \"kubernetes.io/projected/08648390-1625-4b20-b6af-b31b077a9cd2-kube-api-access-xsb8z\") pod \"08648390-1625-4b20-b6af-b31b077a9cd2\" (UID: \"08648390-1625-4b20-b6af-b31b077a9cd2\") " Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.350227 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08648390-1625-4b20-b6af-b31b077a9cd2-kube-api-access-xsb8z" (OuterVolumeSpecName: "kube-api-access-xsb8z") pod "08648390-1625-4b20-b6af-b31b077a9cd2" (UID: "08648390-1625-4b20-b6af-b31b077a9cd2"). InnerVolumeSpecName "kube-api-access-xsb8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.446039 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsb8z\" (UniqueName: \"kubernetes.io/projected/08648390-1625-4b20-b6af-b31b077a9cd2-kube-api-access-xsb8z\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.699050 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2e782faf3dfc42b103d51cecaf83ac375ee313bbcebd821601b38fc04093257" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.699145 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.706977 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:07:15 crc kubenswrapper[4988]: E1123 08:07:15.707351 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08648390-1625-4b20-b6af-b31b077a9cd2" containerName="mariadb-client-2-default" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.707368 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="08648390-1625-4b20-b6af-b31b077a9cd2" containerName="mariadb-client-2-default" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.707549 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="08648390-1625-4b20-b6af-b31b077a9cd2" containerName="mariadb-client-2-default" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.708151 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.714011 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q8l7k" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.720617 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.853916 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vxmk\" (UniqueName: \"kubernetes.io/projected/df5f58c0-c4a4-4f79-be75-d0a0021fc46a-kube-api-access-5vxmk\") pod \"mariadb-client-1\" (UID: \"df5f58c0-c4a4-4f79-be75-d0a0021fc46a\") " pod="openstack/mariadb-client-1" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.955860 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vxmk\" (UniqueName: \"kubernetes.io/projected/df5f58c0-c4a4-4f79-be75-d0a0021fc46a-kube-api-access-5vxmk\") pod \"mariadb-client-1\" (UID: \"df5f58c0-c4a4-4f79-be75-d0a0021fc46a\") " pod="openstack/mariadb-client-1" Nov 23 08:07:15 crc kubenswrapper[4988]: I1123 08:07:15.989549 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vxmk\" (UniqueName: \"kubernetes.io/projected/df5f58c0-c4a4-4f79-be75-d0a0021fc46a-kube-api-access-5vxmk\") pod \"mariadb-client-1\" (UID: \"df5f58c0-c4a4-4f79-be75-d0a0021fc46a\") " pod="openstack/mariadb-client-1" Nov 23 08:07:16 crc kubenswrapper[4988]: I1123 08:07:16.038413 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 23 08:07:16 crc kubenswrapper[4988]: I1123 08:07:16.513342 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08648390-1625-4b20-b6af-b31b077a9cd2" path="/var/lib/kubelet/pods/08648390-1625-4b20-b6af-b31b077a9cd2/volumes" Nov 23 08:07:16 crc kubenswrapper[4988]: I1123 08:07:16.649445 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:07:16 crc kubenswrapper[4988]: W1123 08:07:16.657608 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf5f58c0_c4a4_4f79_be75_d0a0021fc46a.slice/crio-4e719ebdfc768cc4c0af83ee7aa8a6e533c58aed88621b1469cbc00b340f6d45 WatchSource:0}: Error finding container 4e719ebdfc768cc4c0af83ee7aa8a6e533c58aed88621b1469cbc00b340f6d45: Status 404 returned error can't find the container with id 4e719ebdfc768cc4c0af83ee7aa8a6e533c58aed88621b1469cbc00b340f6d45 Nov 23 08:07:16 crc kubenswrapper[4988]: I1123 08:07:16.708785 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"df5f58c0-c4a4-4f79-be75-d0a0021fc46a","Type":"ContainerStarted","Data":"4e719ebdfc768cc4c0af83ee7aa8a6e533c58aed88621b1469cbc00b340f6d45"} Nov 23 08:07:17 crc kubenswrapper[4988]: I1123 08:07:17.722738 4988 generic.go:334] "Generic (PLEG): container finished" podID="df5f58c0-c4a4-4f79-be75-d0a0021fc46a" containerID="12d2c270fc23f1118064d42e3340d35af60c32015ff570069c28e4f144064e2d" exitCode=0 Nov 23 08:07:17 crc kubenswrapper[4988]: I1123 08:07:17.722807 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"df5f58c0-c4a4-4f79-be75-d0a0021fc46a","Type":"ContainerDied","Data":"12d2c270fc23f1118064d42e3340d35af60c32015ff570069c28e4f144064e2d"} Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.208944 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.230172 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vxmk\" (UniqueName: \"kubernetes.io/projected/df5f58c0-c4a4-4f79-be75-d0a0021fc46a-kube-api-access-5vxmk\") pod \"df5f58c0-c4a4-4f79-be75-d0a0021fc46a\" (UID: \"df5f58c0-c4a4-4f79-be75-d0a0021fc46a\") " Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.232903 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_df5f58c0-c4a4-4f79-be75-d0a0021fc46a/mariadb-client-1/0.log" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.237398 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5f58c0-c4a4-4f79-be75-d0a0021fc46a-kube-api-access-5vxmk" (OuterVolumeSpecName: "kube-api-access-5vxmk") pod "df5f58c0-c4a4-4f79-be75-d0a0021fc46a" (UID: "df5f58c0-c4a4-4f79-be75-d0a0021fc46a"). InnerVolumeSpecName "kube-api-access-5vxmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.270680 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.279120 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.333063 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vxmk\" (UniqueName: \"kubernetes.io/projected/df5f58c0-c4a4-4f79-be75-d0a0021fc46a-kube-api-access-5vxmk\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.755113 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e719ebdfc768cc4c0af83ee7aa8a6e533c58aed88621b1469cbc00b340f6d45" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.755520 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.767353 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:07:19 crc kubenswrapper[4988]: E1123 08:07:19.767944 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5f58c0-c4a4-4f79-be75-d0a0021fc46a" containerName="mariadb-client-1" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.767986 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5f58c0-c4a4-4f79-be75-d0a0021fc46a" containerName="mariadb-client-1" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.768368 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5f58c0-c4a4-4f79-be75-d0a0021fc46a" containerName="mariadb-client-1" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.769423 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.774612 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q8l7k" Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.781924 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:07:19 crc kubenswrapper[4988]: I1123 08:07:19.940253 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2m8g\" (UniqueName: \"kubernetes.io/projected/f4b25f5a-508f-445e-bd80-e69dcfae2ee3-kube-api-access-l2m8g\") pod \"mariadb-client-4-default\" (UID: \"f4b25f5a-508f-445e-bd80-e69dcfae2ee3\") " pod="openstack/mariadb-client-4-default" Nov 23 08:07:20 crc kubenswrapper[4988]: I1123 08:07:20.042314 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2m8g\" (UniqueName: \"kubernetes.io/projected/f4b25f5a-508f-445e-bd80-e69dcfae2ee3-kube-api-access-l2m8g\") pod \"mariadb-client-4-default\" (UID: \"f4b25f5a-508f-445e-bd80-e69dcfae2ee3\") " pod="openstack/mariadb-client-4-default" Nov 23 08:07:20 crc kubenswrapper[4988]: I1123 08:07:20.065578 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2m8g\" (UniqueName: \"kubernetes.io/projected/f4b25f5a-508f-445e-bd80-e69dcfae2ee3-kube-api-access-l2m8g\") pod \"mariadb-client-4-default\" (UID: \"f4b25f5a-508f-445e-bd80-e69dcfae2ee3\") " pod="openstack/mariadb-client-4-default" Nov 23 08:07:20 crc kubenswrapper[4988]: I1123 08:07:20.102526 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 23 08:07:20 crc kubenswrapper[4988]: I1123 08:07:20.511018 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5f58c0-c4a4-4f79-be75-d0a0021fc46a" path="/var/lib/kubelet/pods/df5f58c0-c4a4-4f79-be75-d0a0021fc46a/volumes" Nov 23 08:07:20 crc kubenswrapper[4988]: I1123 08:07:20.645662 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:07:21 crc kubenswrapper[4988]: E1123 08:07:21.160905 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-conmon-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache]" Nov 23 08:07:21 crc kubenswrapper[4988]: W1123 08:07:21.214794 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4b25f5a_508f_445e_bd80_e69dcfae2ee3.slice/crio-84ae636176cd843e05826af5646c7070f5a312593e1d029e9e344a8ebe7a2a75 WatchSource:0}: Error finding container 84ae636176cd843e05826af5646c7070f5a312593e1d029e9e344a8ebe7a2a75: Status 404 returned error can't find the container with id 84ae636176cd843e05826af5646c7070f5a312593e1d029e9e344a8ebe7a2a75 Nov 23 08:07:21 crc kubenswrapper[4988]: I1123 08:07:21.672974 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:07:21 crc kubenswrapper[4988]: I1123 08:07:21.673040 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:07:21 crc kubenswrapper[4988]: I1123 08:07:21.776143 4988 generic.go:334] "Generic (PLEG): container finished" podID="f4b25f5a-508f-445e-bd80-e69dcfae2ee3" containerID="570cd11c3a23b15d7b1299241ec08a9f373d2a742aea4cfa6cd0eaf80af4d7c1" exitCode=0 Nov 23 08:07:21 crc kubenswrapper[4988]: I1123 08:07:21.776245 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"f4b25f5a-508f-445e-bd80-e69dcfae2ee3","Type":"ContainerDied","Data":"570cd11c3a23b15d7b1299241ec08a9f373d2a742aea4cfa6cd0eaf80af4d7c1"} Nov 23 08:07:21 crc kubenswrapper[4988]: I1123 08:07:21.776276 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"f4b25f5a-508f-445e-bd80-e69dcfae2ee3","Type":"ContainerStarted","Data":"84ae636176cd843e05826af5646c7070f5a312593e1d029e9e344a8ebe7a2a75"} Nov 23 08:07:23 crc kubenswrapper[4988]: I1123 08:07:23.171486 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 23 08:07:23 crc kubenswrapper[4988]: I1123 08:07:23.190038 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_f4b25f5a-508f-445e-bd80-e69dcfae2ee3/mariadb-client-4-default/0.log" Nov 23 08:07:23 crc kubenswrapper[4988]: I1123 08:07:23.199120 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2m8g\" (UniqueName: \"kubernetes.io/projected/f4b25f5a-508f-445e-bd80-e69dcfae2ee3-kube-api-access-l2m8g\") pod \"f4b25f5a-508f-445e-bd80-e69dcfae2ee3\" (UID: \"f4b25f5a-508f-445e-bd80-e69dcfae2ee3\") " Nov 23 08:07:23 crc kubenswrapper[4988]: I1123 08:07:23.208042 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b25f5a-508f-445e-bd80-e69dcfae2ee3-kube-api-access-l2m8g" (OuterVolumeSpecName: "kube-api-access-l2m8g") pod "f4b25f5a-508f-445e-bd80-e69dcfae2ee3" (UID: "f4b25f5a-508f-445e-bd80-e69dcfae2ee3"). InnerVolumeSpecName "kube-api-access-l2m8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:07:23 crc kubenswrapper[4988]: I1123 08:07:23.221963 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:07:23 crc kubenswrapper[4988]: I1123 08:07:23.228435 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:07:23 crc kubenswrapper[4988]: I1123 08:07:23.301121 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2m8g\" (UniqueName: \"kubernetes.io/projected/f4b25f5a-508f-445e-bd80-e69dcfae2ee3-kube-api-access-l2m8g\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:23 crc kubenswrapper[4988]: I1123 08:07:23.792640 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84ae636176cd843e05826af5646c7070f5a312593e1d029e9e344a8ebe7a2a75" Nov 23 08:07:23 crc kubenswrapper[4988]: I1123 08:07:23.792737 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 23 08:07:24 crc kubenswrapper[4988]: I1123 08:07:24.512540 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b25f5a-508f-445e-bd80-e69dcfae2ee3" path="/var/lib/kubelet/pods/f4b25f5a-508f-445e-bd80-e69dcfae2ee3/volumes" Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.599427 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:07:26 crc kubenswrapper[4988]: E1123 08:07:26.599794 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b25f5a-508f-445e-bd80-e69dcfae2ee3" containerName="mariadb-client-4-default" Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.599809 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b25f5a-508f-445e-bd80-e69dcfae2ee3" containerName="mariadb-client-4-default" Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.599985 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b25f5a-508f-445e-bd80-e69dcfae2ee3" containerName="mariadb-client-4-default" Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.600595 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.608050 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q8l7k" Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.647176 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.754044 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8p7z\" (UniqueName: \"kubernetes.io/projected/89c2ec19-d178-4287-a04e-61b5b08bc489-kube-api-access-h8p7z\") pod \"mariadb-client-5-default\" (UID: \"89c2ec19-d178-4287-a04e-61b5b08bc489\") " pod="openstack/mariadb-client-5-default" Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.855336 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8p7z\" (UniqueName: \"kubernetes.io/projected/89c2ec19-d178-4287-a04e-61b5b08bc489-kube-api-access-h8p7z\") pod \"mariadb-client-5-default\" (UID: \"89c2ec19-d178-4287-a04e-61b5b08bc489\") " pod="openstack/mariadb-client-5-default" Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.886426 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8p7z\" (UniqueName: \"kubernetes.io/projected/89c2ec19-d178-4287-a04e-61b5b08bc489-kube-api-access-h8p7z\") pod \"mariadb-client-5-default\" (UID: \"89c2ec19-d178-4287-a04e-61b5b08bc489\") " pod="openstack/mariadb-client-5-default" Nov 23 08:07:26 crc kubenswrapper[4988]: I1123 08:07:26.940428 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 23 08:07:27 crc kubenswrapper[4988]: I1123 08:07:27.547870 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:07:27 crc kubenswrapper[4988]: I1123 08:07:27.834511 4988 generic.go:334] "Generic (PLEG): container finished" podID="89c2ec19-d178-4287-a04e-61b5b08bc489" containerID="7af7bfe1ec3a8732e1d2fa9df86088c121cf21444f2c5510c202802abd07315b" exitCode=0 Nov 23 08:07:27 crc kubenswrapper[4988]: I1123 08:07:27.834554 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"89c2ec19-d178-4287-a04e-61b5b08bc489","Type":"ContainerDied","Data":"7af7bfe1ec3a8732e1d2fa9df86088c121cf21444f2c5510c202802abd07315b"} Nov 23 08:07:27 crc kubenswrapper[4988]: I1123 08:07:27.834581 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"89c2ec19-d178-4287-a04e-61b5b08bc489","Type":"ContainerStarted","Data":"4dad35a095e7f01319a8891f64c3d4fb58126f444c2f3525d80ef15012ae7c16"} Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.281440 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.309436 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_89c2ec19-d178-4287-a04e-61b5b08bc489/mariadb-client-5-default/0.log" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.338845 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.346129 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.395769 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8p7z\" (UniqueName: \"kubernetes.io/projected/89c2ec19-d178-4287-a04e-61b5b08bc489-kube-api-access-h8p7z\") pod \"89c2ec19-d178-4287-a04e-61b5b08bc489\" (UID: \"89c2ec19-d178-4287-a04e-61b5b08bc489\") " Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.405654 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c2ec19-d178-4287-a04e-61b5b08bc489-kube-api-access-h8p7z" (OuterVolumeSpecName: "kube-api-access-h8p7z") pod "89c2ec19-d178-4287-a04e-61b5b08bc489" (UID: "89c2ec19-d178-4287-a04e-61b5b08bc489"). InnerVolumeSpecName "kube-api-access-h8p7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.476839 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Nov 23 08:07:29 crc kubenswrapper[4988]: E1123 08:07:29.477231 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c2ec19-d178-4287-a04e-61b5b08bc489" containerName="mariadb-client-5-default" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.477248 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c2ec19-d178-4287-a04e-61b5b08bc489" containerName="mariadb-client-5-default" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.477412 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c2ec19-d178-4287-a04e-61b5b08bc489" containerName="mariadb-client-5-default" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.477915 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.488612 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.497650 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8p7z\" (UniqueName: \"kubernetes.io/projected/89c2ec19-d178-4287-a04e-61b5b08bc489-kube-api-access-h8p7z\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.599676 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29xf\" (UniqueName: \"kubernetes.io/projected/d87d8420-1d1f-4ef7-86af-6a1a2713ed78-kube-api-access-m29xf\") pod \"mariadb-client-6-default\" (UID: \"d87d8420-1d1f-4ef7-86af-6a1a2713ed78\") " pod="openstack/mariadb-client-6-default" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.701317 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m29xf\" (UniqueName: \"kubernetes.io/projected/d87d8420-1d1f-4ef7-86af-6a1a2713ed78-kube-api-access-m29xf\") pod \"mariadb-client-6-default\" (UID: \"d87d8420-1d1f-4ef7-86af-6a1a2713ed78\") " pod="openstack/mariadb-client-6-default" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.733463 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29xf\" (UniqueName: \"kubernetes.io/projected/d87d8420-1d1f-4ef7-86af-6a1a2713ed78-kube-api-access-m29xf\") pod \"mariadb-client-6-default\" (UID: \"d87d8420-1d1f-4ef7-86af-6a1a2713ed78\") " pod="openstack/mariadb-client-6-default" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.815626 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.857776 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dad35a095e7f01319a8891f64c3d4fb58126f444c2f3525d80ef15012ae7c16" Nov 23 08:07:29 crc kubenswrapper[4988]: I1123 08:07:29.857891 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 23 08:07:30 crc kubenswrapper[4988]: I1123 08:07:30.402794 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 23 08:07:30 crc kubenswrapper[4988]: W1123 08:07:30.413263 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd87d8420_1d1f_4ef7_86af_6a1a2713ed78.slice/crio-998b1cab71fcf7a6fbd97108314fd41f92bf3e76bc551d80db4f9c4406ca8379 WatchSource:0}: Error finding container 998b1cab71fcf7a6fbd97108314fd41f92bf3e76bc551d80db4f9c4406ca8379: Status 404 returned error can't find the container with id 998b1cab71fcf7a6fbd97108314fd41f92bf3e76bc551d80db4f9c4406ca8379 Nov 23 08:07:30 crc kubenswrapper[4988]: I1123 08:07:30.506218 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c2ec19-d178-4287-a04e-61b5b08bc489" path="/var/lib/kubelet/pods/89c2ec19-d178-4287-a04e-61b5b08bc489/volumes" Nov 23 08:07:30 crc kubenswrapper[4988]: I1123 08:07:30.868349 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"d87d8420-1d1f-4ef7-86af-6a1a2713ed78","Type":"ContainerStarted","Data":"180ab18d48706d8a98483948eb2a4f8d4b2aadae6e02ccca9035ccc305f3481e"} Nov 23 08:07:30 crc kubenswrapper[4988]: I1123 08:07:30.868806 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"d87d8420-1d1f-4ef7-86af-6a1a2713ed78","Type":"ContainerStarted","Data":"998b1cab71fcf7a6fbd97108314fd41f92bf3e76bc551d80db4f9c4406ca8379"} Nov 23 08:07:30 crc kubenswrapper[4988]: I1123 08:07:30.885043 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.8850231480000001 podStartE2EDuration="1.885023148s" podCreationTimestamp="2025-11-23 08:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:07:30.883130812 +0000 UTC m=+4903.191643615" watchObservedRunningTime="2025-11-23 08:07:30.885023148 +0000 UTC m=+4903.193535901" Nov 23 08:07:31 crc kubenswrapper[4988]: E1123 08:07:31.396871 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-conmon-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache]" Nov 23 08:07:31 crc kubenswrapper[4988]: I1123 08:07:31.875900 4988 generic.go:334] "Generic (PLEG): container finished" podID="d87d8420-1d1f-4ef7-86af-6a1a2713ed78" containerID="180ab18d48706d8a98483948eb2a4f8d4b2aadae6e02ccca9035ccc305f3481e" exitCode=1 Nov 23 08:07:31 crc kubenswrapper[4988]: I1123 08:07:31.876025 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"d87d8420-1d1f-4ef7-86af-6a1a2713ed78","Type":"ContainerDied","Data":"180ab18d48706d8a98483948eb2a4f8d4b2aadae6e02ccca9035ccc305f3481e"} Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.314267 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.352048 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.359149 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.462158 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m29xf\" (UniqueName: \"kubernetes.io/projected/d87d8420-1d1f-4ef7-86af-6a1a2713ed78-kube-api-access-m29xf\") pod \"d87d8420-1d1f-4ef7-86af-6a1a2713ed78\" (UID: \"d87d8420-1d1f-4ef7-86af-6a1a2713ed78\") " Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.471162 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d87d8420-1d1f-4ef7-86af-6a1a2713ed78-kube-api-access-m29xf" (OuterVolumeSpecName: "kube-api-access-m29xf") pod "d87d8420-1d1f-4ef7-86af-6a1a2713ed78" (UID: "d87d8420-1d1f-4ef7-86af-6a1a2713ed78"). InnerVolumeSpecName "kube-api-access-m29xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.504706 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Nov 23 08:07:33 crc kubenswrapper[4988]: E1123 08:07:33.505380 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d87d8420-1d1f-4ef7-86af-6a1a2713ed78" containerName="mariadb-client-6-default" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.505398 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d87d8420-1d1f-4ef7-86af-6a1a2713ed78" containerName="mariadb-client-6-default" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.505723 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d87d8420-1d1f-4ef7-86af-6a1a2713ed78" containerName="mariadb-client-6-default" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.506400 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.522234 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.564637 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m29xf\" (UniqueName: \"kubernetes.io/projected/d87d8420-1d1f-4ef7-86af-6a1a2713ed78-kube-api-access-m29xf\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.666418 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45cfk\" (UniqueName: \"kubernetes.io/projected/1788e5a9-f428-425e-b685-371d0e3fc8e9-kube-api-access-45cfk\") pod \"mariadb-client-7-default\" (UID: \"1788e5a9-f428-425e-b685-371d0e3fc8e9\") " pod="openstack/mariadb-client-7-default" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.768028 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45cfk\" (UniqueName: \"kubernetes.io/projected/1788e5a9-f428-425e-b685-371d0e3fc8e9-kube-api-access-45cfk\") pod \"mariadb-client-7-default\" (UID: \"1788e5a9-f428-425e-b685-371d0e3fc8e9\") " pod="openstack/mariadb-client-7-default" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.800917 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45cfk\" (UniqueName: \"kubernetes.io/projected/1788e5a9-f428-425e-b685-371d0e3fc8e9-kube-api-access-45cfk\") pod \"mariadb-client-7-default\" (UID: \"1788e5a9-f428-425e-b685-371d0e3fc8e9\") " pod="openstack/mariadb-client-7-default" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.834175 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.897323 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="998b1cab71fcf7a6fbd97108314fd41f92bf3e76bc551d80db4f9c4406ca8379" Nov 23 08:07:33 crc kubenswrapper[4988]: I1123 08:07:33.897414 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 23 08:07:34 crc kubenswrapper[4988]: I1123 08:07:34.438697 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 23 08:07:34 crc kubenswrapper[4988]: W1123 08:07:34.512383 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1788e5a9_f428_425e_b685_371d0e3fc8e9.slice/crio-c145c66bbd5528bf46d706688065cf7f51ffdda32930a590cf470c7de4e5fa43 WatchSource:0}: Error finding container c145c66bbd5528bf46d706688065cf7f51ffdda32930a590cf470c7de4e5fa43: Status 404 returned error can't find the container with id c145c66bbd5528bf46d706688065cf7f51ffdda32930a590cf470c7de4e5fa43 Nov 23 08:07:34 crc kubenswrapper[4988]: I1123 08:07:34.512422 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d87d8420-1d1f-4ef7-86af-6a1a2713ed78" path="/var/lib/kubelet/pods/d87d8420-1d1f-4ef7-86af-6a1a2713ed78/volumes" Nov 23 08:07:34 crc kubenswrapper[4988]: I1123 08:07:34.932864 4988 generic.go:334] "Generic (PLEG): container finished" podID="1788e5a9-f428-425e-b685-371d0e3fc8e9" containerID="efd01d02796aa6bcad44eb1189a5e7a4ab9fc4ca8a982e91a1d3af4e57dfcbe9" exitCode=0 Nov 23 08:07:34 crc kubenswrapper[4988]: I1123 08:07:34.932989 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"1788e5a9-f428-425e-b685-371d0e3fc8e9","Type":"ContainerDied","Data":"efd01d02796aa6bcad44eb1189a5e7a4ab9fc4ca8a982e91a1d3af4e57dfcbe9"} Nov 23 08:07:34 crc kubenswrapper[4988]: I1123 08:07:34.933317 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"1788e5a9-f428-425e-b685-371d0e3fc8e9","Type":"ContainerStarted","Data":"c145c66bbd5528bf46d706688065cf7f51ffdda32930a590cf470c7de4e5fa43"} Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.055799 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.071349 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_1788e5a9-f428-425e-b685-371d0e3fc8e9/mariadb-client-7-default/0.log" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.097648 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.104471 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.228119 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45cfk\" (UniqueName: \"kubernetes.io/projected/1788e5a9-f428-425e-b685-371d0e3fc8e9-kube-api-access-45cfk\") pod \"1788e5a9-f428-425e-b685-371d0e3fc8e9\" (UID: \"1788e5a9-f428-425e-b685-371d0e3fc8e9\") " Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.237323 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Nov 23 08:07:37 crc kubenswrapper[4988]: E1123 08:07:37.237700 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1788e5a9-f428-425e-b685-371d0e3fc8e9" containerName="mariadb-client-7-default" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.237721 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1788e5a9-f428-425e-b685-371d0e3fc8e9" containerName="mariadb-client-7-default" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.237913 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1788e5a9-f428-425e-b685-371d0e3fc8e9" containerName="mariadb-client-7-default" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.238472 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1788e5a9-f428-425e-b685-371d0e3fc8e9-kube-api-access-45cfk" (OuterVolumeSpecName: "kube-api-access-45cfk") pod "1788e5a9-f428-425e-b685-371d0e3fc8e9" (UID: "1788e5a9-f428-425e-b685-371d0e3fc8e9"). InnerVolumeSpecName "kube-api-access-45cfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.238692 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.252005 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.329856 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45cfk\" (UniqueName: \"kubernetes.io/projected/1788e5a9-f428-425e-b685-371d0e3fc8e9-kube-api-access-45cfk\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.431570 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwjn\" (UniqueName: \"kubernetes.io/projected/98b3688a-c9bb-40af-afb8-c72d7a353c7e-kube-api-access-8xwjn\") pod \"mariadb-client-2\" (UID: \"98b3688a-c9bb-40af-afb8-c72d7a353c7e\") " pod="openstack/mariadb-client-2" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.533573 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xwjn\" (UniqueName: \"kubernetes.io/projected/98b3688a-c9bb-40af-afb8-c72d7a353c7e-kube-api-access-8xwjn\") pod \"mariadb-client-2\" (UID: \"98b3688a-c9bb-40af-afb8-c72d7a353c7e\") " pod="openstack/mariadb-client-2" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.557528 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xwjn\" (UniqueName: \"kubernetes.io/projected/98b3688a-c9bb-40af-afb8-c72d7a353c7e-kube-api-access-8xwjn\") pod \"mariadb-client-2\" (UID: \"98b3688a-c9bb-40af-afb8-c72d7a353c7e\") " pod="openstack/mariadb-client-2" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.589392 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.969030 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c145c66bbd5528bf46d706688065cf7f51ffdda32930a590cf470c7de4e5fa43" Nov 23 08:07:37 crc kubenswrapper[4988]: I1123 08:07:37.969109 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 23 08:07:38 crc kubenswrapper[4988]: I1123 08:07:38.182812 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 23 08:07:38 crc kubenswrapper[4988]: W1123 08:07:38.194438 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b3688a_c9bb_40af_afb8_c72d7a353c7e.slice/crio-a2540c3d0b5b7669278f132c144e8df51605cd47f352a12488e8cb7ccd410625 WatchSource:0}: Error finding container a2540c3d0b5b7669278f132c144e8df51605cd47f352a12488e8cb7ccd410625: Status 404 returned error can't find the container with id a2540c3d0b5b7669278f132c144e8df51605cd47f352a12488e8cb7ccd410625 Nov 23 08:07:38 crc kubenswrapper[4988]: I1123 08:07:38.513004 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1788e5a9-f428-425e-b685-371d0e3fc8e9" path="/var/lib/kubelet/pods/1788e5a9-f428-425e-b685-371d0e3fc8e9/volumes" Nov 23 08:07:38 crc kubenswrapper[4988]: I1123 08:07:38.981904 4988 generic.go:334] "Generic (PLEG): container finished" podID="98b3688a-c9bb-40af-afb8-c72d7a353c7e" containerID="871e516b6061f0f5538c5b00d2ac2ec7526778b66dde626019b603fd80eb913b" exitCode=0 Nov 23 08:07:38 crc kubenswrapper[4988]: I1123 08:07:38.982017 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"98b3688a-c9bb-40af-afb8-c72d7a353c7e","Type":"ContainerDied","Data":"871e516b6061f0f5538c5b00d2ac2ec7526778b66dde626019b603fd80eb913b"} Nov 23 08:07:38 crc kubenswrapper[4988]: I1123 08:07:38.982494 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"98b3688a-c9bb-40af-afb8-c72d7a353c7e","Type":"ContainerStarted","Data":"a2540c3d0b5b7669278f132c144e8df51605cd47f352a12488e8cb7ccd410625"} Nov 23 08:07:40 crc kubenswrapper[4988]: I1123 08:07:40.466711 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 23 08:07:40 crc kubenswrapper[4988]: I1123 08:07:40.483471 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_98b3688a-c9bb-40af-afb8-c72d7a353c7e/mariadb-client-2/0.log" Nov 23 08:07:40 crc kubenswrapper[4988]: I1123 08:07:40.521673 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Nov 23 08:07:40 crc kubenswrapper[4988]: I1123 08:07:40.528940 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Nov 23 08:07:40 crc kubenswrapper[4988]: I1123 08:07:40.584003 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xwjn\" (UniqueName: \"kubernetes.io/projected/98b3688a-c9bb-40af-afb8-c72d7a353c7e-kube-api-access-8xwjn\") pod \"98b3688a-c9bb-40af-afb8-c72d7a353c7e\" (UID: \"98b3688a-c9bb-40af-afb8-c72d7a353c7e\") " Nov 23 08:07:40 crc kubenswrapper[4988]: I1123 08:07:40.592516 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b3688a-c9bb-40af-afb8-c72d7a353c7e-kube-api-access-8xwjn" (OuterVolumeSpecName: "kube-api-access-8xwjn") pod "98b3688a-c9bb-40af-afb8-c72d7a353c7e" (UID: "98b3688a-c9bb-40af-afb8-c72d7a353c7e"). InnerVolumeSpecName "kube-api-access-8xwjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:07:40 crc kubenswrapper[4988]: I1123 08:07:40.689380 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xwjn\" (UniqueName: \"kubernetes.io/projected/98b3688a-c9bb-40af-afb8-c72d7a353c7e-kube-api-access-8xwjn\") on node \"crc\" DevicePath \"\"" Nov 23 08:07:41 crc kubenswrapper[4988]: I1123 08:07:41.018465 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2540c3d0b5b7669278f132c144e8df51605cd47f352a12488e8cb7ccd410625" Nov 23 08:07:41 crc kubenswrapper[4988]: I1123 08:07:41.018544 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 23 08:07:41 crc kubenswrapper[4988]: E1123 08:07:41.603932 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-conmon-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc03f11b_e69c_48c7_80e9_a044721fbf1e.slice/crio-01b86ef1a1d9eda5ed865815c7216f160ed0bbbada46229945b1b53e36a92c36.scope\": RecentStats: unable to find data in memory cache]" Nov 23 08:07:42 crc kubenswrapper[4988]: I1123 08:07:42.514630 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b3688a-c9bb-40af-afb8-c72d7a353c7e" path="/var/lib/kubelet/pods/98b3688a-c9bb-40af-afb8-c72d7a353c7e/volumes" Nov 23 08:07:51 crc kubenswrapper[4988]: I1123 08:07:51.671693 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:07:51 crc kubenswrapper[4988]: I1123 08:07:51.672242 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:08:04 crc kubenswrapper[4988]: I1123 08:08:04.581783 4988 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podd87d8420-1d1f-4ef7-86af-6a1a2713ed78"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podd87d8420-1d1f-4ef7-86af-6a1a2713ed78] : Timed out while waiting for systemd to remove kubepods-besteffort-podd87d8420_1d1f_4ef7_86af_6a1a2713ed78.slice" Nov 23 08:08:04 crc kubenswrapper[4988]: E1123 08:08:04.582422 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podd87d8420-1d1f-4ef7-86af-6a1a2713ed78] : unable to destroy cgroup paths for cgroup [kubepods besteffort podd87d8420-1d1f-4ef7-86af-6a1a2713ed78] : Timed out while waiting for systemd to remove kubepods-besteffort-podd87d8420_1d1f_4ef7_86af_6a1a2713ed78.slice" pod="openstack/mariadb-client-6-default" podUID="d87d8420-1d1f-4ef7-86af-6a1a2713ed78" Nov 23 08:08:05 crc kubenswrapper[4988]: I1123 08:08:05.242707 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 23 08:08:21 crc kubenswrapper[4988]: I1123 08:08:21.671735 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:08:21 crc kubenswrapper[4988]: I1123 08:08:21.672311 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:08:21 crc kubenswrapper[4988]: I1123 08:08:21.672360 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 08:08:21 crc kubenswrapper[4988]: I1123 08:08:21.672956 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bb2e6bd6db24d664f8d0930235f12055689a53d4dac8186f041dc0d609d1b5d"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:08:21 crc kubenswrapper[4988]: I1123 08:08:21.673010 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://1bb2e6bd6db24d664f8d0930235f12055689a53d4dac8186f041dc0d609d1b5d" gracePeriod=600 Nov 23 08:08:22 crc kubenswrapper[4988]: I1123 08:08:22.418733 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="1bb2e6bd6db24d664f8d0930235f12055689a53d4dac8186f041dc0d609d1b5d" exitCode=0 Nov 23 08:08:22 crc kubenswrapper[4988]: I1123 08:08:22.418810 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"1bb2e6bd6db24d664f8d0930235f12055689a53d4dac8186f041dc0d609d1b5d"} Nov 23 08:08:22 crc kubenswrapper[4988]: I1123 08:08:22.419352 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9"} Nov 23 08:08:22 crc kubenswrapper[4988]: I1123 08:08:22.419389 4988 scope.go:117] "RemoveContainer" containerID="297d384891ca800b6e25a563fbc1fe0001ec9ca8433b5785fe9dd6440955cc76" Nov 23 08:08:57 crc kubenswrapper[4988]: I1123 08:08:57.065769 4988 scope.go:117] "RemoveContainer" containerID="045f0180421ef3eb3f64508a1234709b06108b933f8e03d475ab2dd23e809491" Nov 23 08:10:21 crc kubenswrapper[4988]: I1123 08:10:21.672590 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:10:21 crc kubenswrapper[4988]: I1123 08:10:21.673457 
4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:10:31 crc kubenswrapper[4988]: I1123 08:10:31.792271 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 08:10:31 crc kubenswrapper[4988]: E1123 08:10:31.793577 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b3688a-c9bb-40af-afb8-c72d7a353c7e" containerName="mariadb-client-2" Nov 23 08:10:31 crc kubenswrapper[4988]: I1123 08:10:31.793613 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b3688a-c9bb-40af-afb8-c72d7a353c7e" containerName="mariadb-client-2" Nov 23 08:10:31 crc kubenswrapper[4988]: I1123 08:10:31.793977 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b3688a-c9bb-40af-afb8-c72d7a353c7e" containerName="mariadb-client-2" Nov 23 08:10:31 crc kubenswrapper[4988]: I1123 08:10:31.795160 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 23 08:10:31 crc kubenswrapper[4988]: I1123 08:10:31.798314 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q8l7k" Nov 23 08:10:31 crc kubenswrapper[4988]: I1123 08:10:31.801679 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 08:10:31 crc kubenswrapper[4988]: I1123 08:10:31.930169 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\") pod \"mariadb-copy-data\" (UID: \"0bc35042-10af-472b-afcf-7408a2efc34d\") " pod="openstack/mariadb-copy-data" Nov 23 08:10:31 crc kubenswrapper[4988]: I1123 08:10:31.930319 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrc4k\" (UniqueName: \"kubernetes.io/projected/0bc35042-10af-472b-afcf-7408a2efc34d-kube-api-access-jrc4k\") pod \"mariadb-copy-data\" (UID: \"0bc35042-10af-472b-afcf-7408a2efc34d\") " pod="openstack/mariadb-copy-data" Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.031219 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\") pod \"mariadb-copy-data\" (UID: \"0bc35042-10af-472b-afcf-7408a2efc34d\") " pod="openstack/mariadb-copy-data" Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.031299 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrc4k\" (UniqueName: \"kubernetes.io/projected/0bc35042-10af-472b-afcf-7408a2efc34d-kube-api-access-jrc4k\") pod \"mariadb-copy-data\" (UID: \"0bc35042-10af-472b-afcf-7408a2efc34d\") " pod="openstack/mariadb-copy-data" Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.034302 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.034344 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\") pod \"mariadb-copy-data\" (UID: \"0bc35042-10af-472b-afcf-7408a2efc34d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b228decd885019ad15522ceb49b04a9bef1fb40b723fbc6e384903744a58a2f6/globalmount\"" pod="openstack/mariadb-copy-data" Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.052577 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrc4k\" (UniqueName: \"kubernetes.io/projected/0bc35042-10af-472b-afcf-7408a2efc34d-kube-api-access-jrc4k\") pod \"mariadb-copy-data\" (UID: \"0bc35042-10af-472b-afcf-7408a2efc34d\") " pod="openstack/mariadb-copy-data" Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.071964 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\") pod \"mariadb-copy-data\" (UID: \"0bc35042-10af-472b-afcf-7408a2efc34d\") " pod="openstack/mariadb-copy-data" Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.157530 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.693284 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.850610 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"0bc35042-10af-472b-afcf-7408a2efc34d","Type":"ContainerStarted","Data":"d592cca7a15c44ff3fa4e07b597a2b7e33f8e10ccd6a333bb2867fcde9056d8d"} Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.850672 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"0bc35042-10af-472b-afcf-7408a2efc34d","Type":"ContainerStarted","Data":"569dcbb506ea526f1fede077245c923fc1c136be7ebc52a4e33d6e7562c17c2a"} Nov 23 08:10:32 crc kubenswrapper[4988]: I1123 08:10:32.869157 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.869137172 podStartE2EDuration="2.869137172s" podCreationTimestamp="2025-11-23 08:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:10:32.866673392 +0000 UTC m=+5085.175186165" watchObservedRunningTime="2025-11-23 08:10:32.869137172 +0000 UTC m=+5085.177649935" Nov 23 08:10:35 crc kubenswrapper[4988]: I1123 08:10:35.700831 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:35 crc kubenswrapper[4988]: I1123 08:10:35.702352 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:10:35 crc kubenswrapper[4988]: I1123 08:10:35.708880 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:35 crc kubenswrapper[4988]: I1123 08:10:35.793164 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrhs\" (UniqueName: \"kubernetes.io/projected/7a8b0700-c508-4d3e-9bca-38d75f7d3384-kube-api-access-njrhs\") pod \"mariadb-client\" (UID: \"7a8b0700-c508-4d3e-9bca-38d75f7d3384\") " pod="openstack/mariadb-client" Nov 23 08:10:35 crc kubenswrapper[4988]: I1123 08:10:35.895926 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njrhs\" (UniqueName: \"kubernetes.io/projected/7a8b0700-c508-4d3e-9bca-38d75f7d3384-kube-api-access-njrhs\") pod \"mariadb-client\" (UID: \"7a8b0700-c508-4d3e-9bca-38d75f7d3384\") " pod="openstack/mariadb-client" Nov 23 08:10:35 crc kubenswrapper[4988]: I1123 08:10:35.932178 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrhs\" (UniqueName: \"kubernetes.io/projected/7a8b0700-c508-4d3e-9bca-38d75f7d3384-kube-api-access-njrhs\") pod \"mariadb-client\" (UID: \"7a8b0700-c508-4d3e-9bca-38d75f7d3384\") " pod="openstack/mariadb-client" Nov 23 08:10:36 crc kubenswrapper[4988]: I1123 08:10:36.029676 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:10:36 crc kubenswrapper[4988]: I1123 08:10:36.546623 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:36 crc kubenswrapper[4988]: W1123 08:10:36.547001 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a8b0700_c508_4d3e_9bca_38d75f7d3384.slice/crio-9f9395908a120ff79588cdb0f0d1fcb3bf03aacbffa486c75658d5b28ea5d828 WatchSource:0}: Error finding container 9f9395908a120ff79588cdb0f0d1fcb3bf03aacbffa486c75658d5b28ea5d828: Status 404 returned error can't find the container with id 9f9395908a120ff79588cdb0f0d1fcb3bf03aacbffa486c75658d5b28ea5d828 Nov 23 08:10:36 crc kubenswrapper[4988]: I1123 08:10:36.892965 4988 generic.go:334] "Generic (PLEG): container finished" podID="7a8b0700-c508-4d3e-9bca-38d75f7d3384" containerID="e7924113091594abf204e8e1f1f5c1ce21bdd7f11e92cc8f91660b21fef1e9d2" exitCode=0 Nov 23 08:10:36 crc kubenswrapper[4988]: I1123 08:10:36.893075 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"7a8b0700-c508-4d3e-9bca-38d75f7d3384","Type":"ContainerDied","Data":"e7924113091594abf204e8e1f1f5c1ce21bdd7f11e92cc8f91660b21fef1e9d2"} Nov 23 08:10:36 crc kubenswrapper[4988]: I1123 08:10:36.893270 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"7a8b0700-c508-4d3e-9bca-38d75f7d3384","Type":"ContainerStarted","Data":"9f9395908a120ff79588cdb0f0d1fcb3bf03aacbffa486c75658d5b28ea5d828"} Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.179375 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.207303 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_7a8b0700-c508-4d3e-9bca-38d75f7d3384/mariadb-client/0.log" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.236604 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.242614 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.339442 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njrhs\" (UniqueName: \"kubernetes.io/projected/7a8b0700-c508-4d3e-9bca-38d75f7d3384-kube-api-access-njrhs\") pod \"7a8b0700-c508-4d3e-9bca-38d75f7d3384\" (UID: \"7a8b0700-c508-4d3e-9bca-38d75f7d3384\") " Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.345579 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8b0700-c508-4d3e-9bca-38d75f7d3384-kube-api-access-njrhs" (OuterVolumeSpecName: "kube-api-access-njrhs") pod "7a8b0700-c508-4d3e-9bca-38d75f7d3384" (UID: "7a8b0700-c508-4d3e-9bca-38d75f7d3384"). InnerVolumeSpecName "kube-api-access-njrhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.381085 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:38 crc kubenswrapper[4988]: E1123 08:10:38.381813 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8b0700-c508-4d3e-9bca-38d75f7d3384" containerName="mariadb-client" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.381834 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8b0700-c508-4d3e-9bca-38d75f7d3384" containerName="mariadb-client" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.382000 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8b0700-c508-4d3e-9bca-38d75f7d3384" containerName="mariadb-client" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.382755 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.388075 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.441853 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njrhs\" (UniqueName: \"kubernetes.io/projected/7a8b0700-c508-4d3e-9bca-38d75f7d3384-kube-api-access-njrhs\") on node \"crc\" DevicePath \"\"" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.510434 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a8b0700-c508-4d3e-9bca-38d75f7d3384" path="/var/lib/kubelet/pods/7a8b0700-c508-4d3e-9bca-38d75f7d3384/volumes" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.544019 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp557\" (UniqueName: \"kubernetes.io/projected/03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51-kube-api-access-qp557\") pod \"mariadb-client\" (UID: \"03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51\") " pod="openstack/mariadb-client" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.646589 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp557\" (UniqueName: \"kubernetes.io/projected/03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51-kube-api-access-qp557\") pod \"mariadb-client\" (UID: \"03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51\") " pod="openstack/mariadb-client" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.673656 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp557\" (UniqueName: \"kubernetes.io/projected/03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51-kube-api-access-qp557\") pod \"mariadb-client\" (UID: \"03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51\") " pod="openstack/mariadb-client" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.700661 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.912110 4988 scope.go:117] "RemoveContainer" containerID="e7924113091594abf204e8e1f1f5c1ce21bdd7f11e92cc8f91660b21fef1e9d2" Nov 23 08:10:38 crc kubenswrapper[4988]: I1123 08:10:38.912160 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:10:39 crc kubenswrapper[4988]: I1123 08:10:39.143702 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:39 crc kubenswrapper[4988]: W1123 08:10:39.149643 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03a5b75d_4ccb_4aa1_8fd5_88b78e0eec51.slice/crio-307e0c3adec840e2ea3dd0e2023b996651aa6f582fc9d6deb23837d60ec63cd2 WatchSource:0}: Error finding container 307e0c3adec840e2ea3dd0e2023b996651aa6f582fc9d6deb23837d60ec63cd2: Status 404 returned error can't find the container with id 307e0c3adec840e2ea3dd0e2023b996651aa6f582fc9d6deb23837d60ec63cd2 Nov 23 08:10:39 crc kubenswrapper[4988]: I1123 08:10:39.926613 4988 generic.go:334] "Generic (PLEG): container finished" podID="03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51" containerID="77d262c74af6f5d8d3713208142e3792fc556ea9bf008c9484f153ac8278fb1c" exitCode=0 Nov 23 08:10:39 crc kubenswrapper[4988]: I1123 08:10:39.926709 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51","Type":"ContainerDied","Data":"77d262c74af6f5d8d3713208142e3792fc556ea9bf008c9484f153ac8278fb1c"} Nov 23 08:10:39 crc kubenswrapper[4988]: I1123 08:10:39.926924 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51","Type":"ContainerStarted","Data":"307e0c3adec840e2ea3dd0e2023b996651aa6f582fc9d6deb23837d60ec63cd2"} Nov 23 08:10:41 crc kubenswrapper[4988]: I1123 08:10:41.286544 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:10:41 crc kubenswrapper[4988]: I1123 08:10:41.291438 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp557\" (UniqueName: \"kubernetes.io/projected/03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51-kube-api-access-qp557\") pod \"03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51\" (UID: \"03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51\") " Nov 23 08:10:41 crc kubenswrapper[4988]: I1123 08:10:41.300161 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51-kube-api-access-qp557" (OuterVolumeSpecName: "kube-api-access-qp557") pod "03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51" (UID: "03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51"). InnerVolumeSpecName "kube-api-access-qp557". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:10:41 crc kubenswrapper[4988]: I1123 08:10:41.308808 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51/mariadb-client/0.log" Nov 23 08:10:41 crc kubenswrapper[4988]: I1123 08:10:41.335945 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:41 crc kubenswrapper[4988]: I1123 08:10:41.342515 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:10:41 crc kubenswrapper[4988]: I1123 08:10:41.392984 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp557\" (UniqueName: \"kubernetes.io/projected/03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51-kube-api-access-qp557\") on node \"crc\" DevicePath \"\"" Nov 23 08:10:41 crc kubenswrapper[4988]: I1123 08:10:41.941639 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="307e0c3adec840e2ea3dd0e2023b996651aa6f582fc9d6deb23837d60ec63cd2" Nov 23 08:10:41 crc kubenswrapper[4988]: I1123 08:10:41.941713 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:10:42 crc kubenswrapper[4988]: I1123 08:10:42.510185 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51" path="/var/lib/kubelet/pods/03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51/volumes" Nov 23 08:10:51 crc kubenswrapper[4988]: I1123 08:10:51.672822 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:10:51 crc kubenswrapper[4988]: I1123 08:10:51.673620 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.285169 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 08:11:17 crc kubenswrapper[4988]: E1123 08:11:17.286165 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51" containerName="mariadb-client" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.286183 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51" containerName="mariadb-client" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.286395 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a5b75d-4ccb-4aa1-8fd5-88b78e0eec51" containerName="mariadb-client" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.287557 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.291522 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.292687 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5kvbm" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.292903 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.292916 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.293257 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.301393 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.304826 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.320934 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ae31f4-b75f-41a4-8794-b12381abe024-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.321040 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ae31f4-b75f-41a4-8794-b12381abe024-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.321066 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ae31f4-b75f-41a4-8794-b12381abe024-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.321101 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ae31f4-b75f-41a4-8794-b12381abe024-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.321138 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxj2\" (UniqueName: \"kubernetes.io/projected/a9ae31f4-b75f-41a4-8794-b12381abe024-kube-api-access-nvxj2\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.321232 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9ae31f4-b75f-41a4-8794-b12381abe024-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " 
pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.321272 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-475d9b66-e122-4fed-9c5d-f6a7b0aa1283\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-475d9b66-e122-4fed-9c5d-f6a7b0aa1283\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.321299 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ae31f4-b75f-41a4-8794-b12381abe024-config\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.321570 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.323109 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.332471 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.341960 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.350315 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.421972 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9ae31f4-b75f-41a4-8794-b12381abe024-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.422267 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-475d9b66-e122-4fed-9c5d-f6a7b0aa1283\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-475d9b66-e122-4fed-9c5d-f6a7b0aa1283\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.422394 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ae31f4-b75f-41a4-8794-b12381abe024-config\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.422489 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9ae31f4-b75f-41a4-8794-b12381abe024-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.422493 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ae31f4-b75f-41a4-8794-b12381abe024-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.422710 4988 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ae31f4-b75f-41a4-8794-b12381abe024-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.422844 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ae31f4-b75f-41a4-8794-b12381abe024-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.423333 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9ae31f4-b75f-41a4-8794-b12381abe024-config\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.424157 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9ae31f4-b75f-41a4-8794-b12381abe024-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.424484 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ae31f4-b75f-41a4-8794-b12381abe024-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.424969 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvxj2\" (UniqueName: \"kubernetes.io/projected/a9ae31f4-b75f-41a4-8794-b12381abe024-kube-api-access-nvxj2\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.426438 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.426473 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-475d9b66-e122-4fed-9c5d-f6a7b0aa1283\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-475d9b66-e122-4fed-9c5d-f6a7b0aa1283\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/32b84a67cda86d44bab6e6fee50851a6e52b6816ee63c50ca0162a2f72138dc9/globalmount\"" pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.428499 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9ae31f4-b75f-41a4-8794-b12381abe024-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.428812 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ae31f4-b75f-41a4-8794-b12381abe024-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.430092 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9ae31f4-b75f-41a4-8794-b12381abe024-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.450633 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvxj2\" (UniqueName: \"kubernetes.io/projected/a9ae31f4-b75f-41a4-8794-b12381abe024-kube-api-access-nvxj2\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.465291 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-475d9b66-e122-4fed-9c5d-f6a7b0aa1283\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-475d9b66-e122-4fed-9c5d-f6a7b0aa1283\") pod \"ovsdbserver-nb-0\" (UID: \"a9ae31f4-b75f-41a4-8794-b12381abe024\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.526891 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527246 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527286 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9ktr\" (UniqueName: \"kubernetes.io/projected/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-kube-api-access-r9ktr\") pod \"ovsdbserver-nb-1\" (UID: 
\"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527302 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-config\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527328 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef74cc1b-9b1f-460e-8561-0964b376250d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef74cc1b-9b1f-460e-8561-0964b376250d\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527403 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527420 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527446 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527463 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-config\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527483 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527517 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527546 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " 
pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527565 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527583 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1541c20e-b535-4680-bc5c-a83bec35804e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1541c20e-b535-4680-bc5c-a83bec35804e\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527598 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqtn4\" (UniqueName: \"kubernetes.io/projected/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-kube-api-access-tqtn4\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.527623 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.628803 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629177 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-config\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629249 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef74cc1b-9b1f-460e-8561-0964b376250d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef74cc1b-9b1f-460e-8561-0964b376250d\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629328 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629365 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629396 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629417 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-config\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629455 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629522 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629576 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629599 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629621 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1541c20e-b535-4680-bc5c-a83bec35804e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1541c20e-b535-4680-bc5c-a83bec35804e\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629641 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqtn4\" (UniqueName: \"kubernetes.io/projected/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-kube-api-access-tqtn4\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.629660 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.631717 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-config\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 
08:11:17.631912 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-config\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.632417 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.632980 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.633669 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.634887 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.634897 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.638973 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.640481 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.640686 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.643020 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " 
pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.644131 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.644167 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1541c20e-b535-4680-bc5c-a83bec35804e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1541c20e-b535-4680-bc5c-a83bec35804e\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/27dd330df878b60242848792cda07aba452d613994c805d7223b8a9cbabc90bc/globalmount\"" pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.644608 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.644647 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.644706 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9ktr\" (UniqueName: \"kubernetes.io/projected/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-kube-api-access-r9ktr\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.646032 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.646080 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef74cc1b-9b1f-460e-8561-0964b376250d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef74cc1b-9b1f-460e-8561-0964b376250d\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/19ecd2c75d4d7a8063802e17efaf54af04be46f9cb0678b167772452a2a60bd3/globalmount\"" pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.647057 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.672726 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqtn4\" (UniqueName: \"kubernetes.io/projected/65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d-kube-api-access-tqtn4\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.697567 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9ktr\" (UniqueName: \"kubernetes.io/projected/ecf8ff15-93c9-45ec-a013-c3e043b01e8d-kube-api-access-r9ktr\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.709582 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1541c20e-b535-4680-bc5c-a83bec35804e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1541c20e-b535-4680-bc5c-a83bec35804e\") pod \"ovsdbserver-nb-2\" (UID: \"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.717862 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef74cc1b-9b1f-460e-8561-0964b376250d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef74cc1b-9b1f-460e-8561-0964b376250d\") pod \"ovsdbserver-nb-1\" (UID: \"ecf8ff15-93c9-45ec-a013-c3e043b01e8d\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.939100 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:17 crc kubenswrapper[4988]: I1123 08:11:17.951485 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.105634 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.279053 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a9ae31f4-b75f-41a4-8794-b12381abe024","Type":"ContainerStarted","Data":"8f9de050ba29f95ea4e362578cc452bb097630d62b433cbb6e847f1f0aa9ceba"} Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.554436 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.684557 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.692399 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.697492 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.702823 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4f92z" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.703053 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.702835 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.703281 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.703671 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.714717 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.732067 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.739455 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.742921 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.768543 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.879799 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66e01e8e-febc-4ccc-b863-3e24332ba0f9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.879856 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.879876 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.879897 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9d0693e-89dc-4b1a-b330-17d4de3e17cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9d0693e-89dc-4b1a-b330-17d4de3e17cc\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.879920 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e01e8e-febc-4ccc-b863-3e24332ba0f9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880224 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f651a69-31ca-40dd-a065-c81f64c4e34c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880287 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880316 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f651a69-31ca-40dd-a065-c81f64c4e34c-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880383 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-config\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880408 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vsql\" (UniqueName: \"kubernetes.io/projected/4f651a69-31ca-40dd-a065-c81f64c4e34c-kube-api-access-9vsql\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880443 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880523 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxmzf\" (UniqueName: \"kubernetes.io/projected/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-kube-api-access-vxmzf\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880612 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66e01e8e-febc-4ccc-b863-3e24332ba0f9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880643 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f651a69-31ca-40dd-a065-c81f64c4e34c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880675 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f651a69-31ca-40dd-a065-c81f64c4e34c-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880699 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880754 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6e0ef037-fe74-45e9-b509-73f9dd4b1e42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e0ef037-fe74-45e9-b509-73f9dd4b1e42\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880827 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c7db\" 
(UniqueName: \"kubernetes.io/projected/66e01e8e-febc-4ccc-b863-3e24332ba0f9-kube-api-access-5c7db\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880856 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f651a69-31ca-40dd-a065-c81f64c4e34c-config\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.880895 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f651a69-31ca-40dd-a065-c81f64c4e34c-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.881926 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66e01e8e-febc-4ccc-b863-3e24332ba0f9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.881985 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-76781ae7-fc9a-4b26-a57b-4191396f689e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-76781ae7-fc9a-4b26-a57b-4191396f689e\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.882008 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/66e01e8e-febc-4ccc-b863-3e24332ba0f9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.882026 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66e01e8e-febc-4ccc-b863-3e24332ba0f9-config\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983530 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxmzf\" (UniqueName: \"kubernetes.io/projected/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-kube-api-access-vxmzf\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983586 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66e01e8e-febc-4ccc-b863-3e24332ba0f9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983612 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f651a69-31ca-40dd-a065-c81f64c4e34c-ovsdbserver-sb-tls-certs\") 
pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983650 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f651a69-31ca-40dd-a065-c81f64c4e34c-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983676 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983709 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6e0ef037-fe74-45e9-b509-73f9dd4b1e42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e0ef037-fe74-45e9-b509-73f9dd4b1e42\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983756 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c7db\" (UniqueName: \"kubernetes.io/projected/66e01e8e-febc-4ccc-b863-3e24332ba0f9-kube-api-access-5c7db\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983791 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f651a69-31ca-40dd-a065-c81f64c4e34c-config\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983826 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f651a69-31ca-40dd-a065-c81f64c4e34c-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983847 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66e01e8e-febc-4ccc-b863-3e24332ba0f9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983876 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-76781ae7-fc9a-4b26-a57b-4191396f689e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-76781ae7-fc9a-4b26-a57b-4191396f689e\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983909 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/66e01e8e-febc-4ccc-b863-3e24332ba0f9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983944 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66e01e8e-febc-4ccc-b863-3e24332ba0f9-config\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983974 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66e01e8e-febc-4ccc-b863-3e24332ba0f9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.983994 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.984018 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a9d0693e-89dc-4b1a-b330-17d4de3e17cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9d0693e-89dc-4b1a-b330-17d4de3e17cc\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.984041 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.984094 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e01e8e-febc-4ccc-b863-3e24332ba0f9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.984160 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f651a69-31ca-40dd-a065-c81f64c4e34c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.984181 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.984230 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f651a69-31ca-40dd-a065-c81f64c4e34c-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.984257 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.984275 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vsql\" (UniqueName: \"kubernetes.io/projected/4f651a69-31ca-40dd-a065-c81f64c4e34c-kube-api-access-9vsql\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.984298 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.985428 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/66e01e8e-febc-4ccc-b863-3e24332ba0f9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.986575 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66e01e8e-febc-4ccc-b863-3e24332ba0f9-config\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.987025 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f651a69-31ca-40dd-a065-c81f64c4e34c-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.987653 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.988933 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66e01e8e-febc-4ccc-b863-3e24332ba0f9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.989931 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66e01e8e-febc-4ccc-b863-3e24332ba0f9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.990184 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-config\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.990588 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.991272 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.991299 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9d0693e-89dc-4b1a-b330-17d4de3e17cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9d0693e-89dc-4b1a-b330-17d4de3e17cc\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8395c73f6dc427ec7815fe6ed9b7ab39f47b80ad3d1dd433f79e0ca4701d50e/globalmount\"" pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.991367 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.991392 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6e0ef037-fe74-45e9-b509-73f9dd4b1e42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e0ef037-fe74-45e9-b509-73f9dd4b1e42\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/16e8e69c64e95c263ed6cb81f494e962af1e8830044ec3c74c1fed7140ea7d34/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.991538 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.992076 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f651a69-31ca-40dd-a065-c81f64c4e34c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.992614 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.992631 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-76781ae7-fc9a-4b26-a57b-4191396f689e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-76781ae7-fc9a-4b26-a57b-4191396f689e\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a7dafe1976f52736951db6e597ac24fabd55ee8be388fcea190b0fc5a2e7a0ef/globalmount\"" pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.992656 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.993040 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f651a69-31ca-40dd-a065-c81f64c4e34c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.995589 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f651a69-31ca-40dd-a065-c81f64c4e34c-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:18 crc kubenswrapper[4988]: I1123 08:11:18.997046 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66e01e8e-febc-4ccc-b863-3e24332ba0f9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:18.998653 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f651a69-31ca-40dd-a065-c81f64c4e34c-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:18.999006 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f651a69-31ca-40dd-a065-c81f64c4e34c-config\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.001110 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66e01e8e-febc-4ccc-b863-3e24332ba0f9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.002938 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxmzf\" (UniqueName: \"kubernetes.io/projected/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-kube-api-access-vxmzf\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.003472 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.019835 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c7db\" (UniqueName: \"kubernetes.io/projected/66e01e8e-febc-4ccc-b863-3e24332ba0f9-kube-api-access-5c7db\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.020103 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vsql\" (UniqueName: \"kubernetes.io/projected/4f651a69-31ca-40dd-a065-c81f64c4e34c-kube-api-access-9vsql\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.026743 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-76781ae7-fc9a-4b26-a57b-4191396f689e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-76781ae7-fc9a-4b26-a57b-4191396f689e\") pod \"ovsdbserver-sb-1\" (UID: \"4f651a69-31ca-40dd-a065-c81f64c4e34c\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.035482 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6e0ef037-fe74-45e9-b509-73f9dd4b1e42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6e0ef037-fe74-45e9-b509-73f9dd4b1e42\") pod \"ovsdbserver-sb-0\" (UID: \"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.050852 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a9d0693e-89dc-4b1a-b330-17d4de3e17cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9d0693e-89dc-4b1a-b330-17d4de3e17cc\") pod \"ovsdbserver-sb-2\" (UID: \"66e01e8e-febc-4ccc-b863-3e24332ba0f9\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.066432 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.089842 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 23 08:11:19 crc kubenswrapper[4988]: W1123 08:11:19.102838 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecf8ff15_93c9_45ec_a013_c3e043b01e8d.slice/crio-8928f8544132597e94e4b2186723f458df1e62d7676fd2696aa9c99e1e78c59f WatchSource:0}: Error finding container 8928f8544132597e94e4b2186723f458df1e62d7676fd2696aa9c99e1e78c59f: Status 404 returned error can't find the container with id 8928f8544132597e94e4b2186723f458df1e62d7676fd2696aa9c99e1e78c59f Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.288052 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"ecf8ff15-93c9-45ec-a013-c3e043b01e8d","Type":"ContainerStarted","Data":"8928f8544132597e94e4b2186723f458df1e62d7676fd2696aa9c99e1e78c59f"} Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.298582 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d","Type":"ContainerStarted","Data":"166c6fca8cb63efac5018cff05a4bcda6ef427446a4773b2cadbdc7d0f782d98"} Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.332709 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.341953 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.630213 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 23 08:11:19 crc kubenswrapper[4988]: I1123 08:11:19.980027 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 23 08:11:20 crc kubenswrapper[4988]: I1123 08:11:20.310643 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"66e01e8e-febc-4ccc-b863-3e24332ba0f9","Type":"ContainerStarted","Data":"f844ed39277d96544e0bcf6376e46182fc71c6390b02daa953ba813286368552"} Nov 23 08:11:20 crc kubenswrapper[4988]: I1123 08:11:20.312564 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"4f651a69-31ca-40dd-a065-c81f64c4e34c","Type":"ContainerStarted","Data":"b88acdd6d5f3b3c5038710aec3c1278d4666dc732a8e0af792fe1c5b1623637a"} Nov 23 08:11:20 crc kubenswrapper[4988]: I1123 08:11:20.636120 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 08:11:21 crc kubenswrapper[4988]: I1123 08:11:21.672629 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:11:21 crc kubenswrapper[4988]: I1123 08:11:21.672897 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:11:21 crc kubenswrapper[4988]: 
I1123 08:11:21.672945 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 08:11:21 crc kubenswrapper[4988]: I1123 08:11:21.673615 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:11:21 crc kubenswrapper[4988]: I1123 08:11:21.673661 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" gracePeriod=600 Nov 23 08:11:21 crc kubenswrapper[4988]: W1123 08:11:21.770174 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode13446ae_e2d8_4e82_b1fe_fa6e4fe7a938.slice/crio-8bd651c4bc4d754d423dc22b9225d998440003333552fe72032e40d16a11507b WatchSource:0}: Error finding container 8bd651c4bc4d754d423dc22b9225d998440003333552fe72032e40d16a11507b: Status 404 returned error can't find the container with id 8bd651c4bc4d754d423dc22b9225d998440003333552fe72032e40d16a11507b Nov 23 08:11:21 crc kubenswrapper[4988]: E1123 08:11:21.997136 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:11:22 crc kubenswrapper[4988]: I1123 08:11:22.327807 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938","Type":"ContainerStarted","Data":"8bd651c4bc4d754d423dc22b9225d998440003333552fe72032e40d16a11507b"} Nov 23 08:11:22 crc kubenswrapper[4988]: I1123 08:11:22.329858 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" exitCode=0 Nov 23 08:11:22 crc kubenswrapper[4988]: I1123 08:11:22.329922 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9"} Nov 23 08:11:22 crc kubenswrapper[4988]: I1123 08:11:22.330002 4988 scope.go:117] "RemoveContainer" containerID="1bb2e6bd6db24d664f8d0930235f12055689a53d4dac8186f041dc0d609d1b5d" Nov 23 08:11:22 crc kubenswrapper[4988]: I1123 08:11:22.330678 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:11:22 crc kubenswrapper[4988]: E1123 08:11:22.331128 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.341258 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d","Type":"ContainerStarted","Data":"fe54d0f5f9e356ac7dae36ce3d7ef464ffcdca5f22a764e42e5612066ecfc56f"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.341572 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d","Type":"ContainerStarted","Data":"ce8489d4abf4324fa426e4540f52221c248930d194036b7789b4376b422b2ba1"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.343883 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a9ae31f4-b75f-41a4-8794-b12381abe024","Type":"ContainerStarted","Data":"55983eeb6941419957c3b4cd2ddcec352489d0c4bb371115a368034775ecce48"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.343977 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a9ae31f4-b75f-41a4-8794-b12381abe024","Type":"ContainerStarted","Data":"3628e27de5fe3898be382daef990805ea9d37472c170b32e29a2a676d35bdb58"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.346728 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938","Type":"ContainerStarted","Data":"80d3a6abd14a5515e4bed5634cb578106343159f41d4f62f7d9e921a7deb4071"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.346770 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938","Type":"ContainerStarted","Data":"e0b212183cbf4e6c43e98a97c24e9c2a443c787b72e1a00324d4a527c3e33c0d"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.350012 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"ecf8ff15-93c9-45ec-a013-c3e043b01e8d","Type":"ContainerStarted","Data":"15f17feeb7eed652f54055613291eeed2667677953d58a1c26d7d3e99629ba16"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.350087 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"ecf8ff15-93c9-45ec-a013-c3e043b01e8d","Type":"ContainerStarted","Data":"4bff4c7c69981fc4bbf796afcbdd013065cdc70e03d400014ec42c728b5df257"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.352922 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"66e01e8e-febc-4ccc-b863-3e24332ba0f9","Type":"ContainerStarted","Data":"e273c13ec80f47108ae8e1c199961c09a5b2ebca07df632ee478cc3e0b77aff9"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.352962 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"66e01e8e-febc-4ccc-b863-3e24332ba0f9","Type":"ContainerStarted","Data":"55ea43bbfa24a8016caccbc1703820a24738f6b0119460708692ea074bf08733"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.358965 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"4f651a69-31ca-40dd-a065-c81f64c4e34c","Type":"ContainerStarted","Data":"f7b5ed20176e83e19f8b549eaf06ac5482ba6ca00a893e62f305109d7ea43165"} 
Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.359015 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"4f651a69-31ca-40dd-a065-c81f64c4e34c","Type":"ContainerStarted","Data":"1e130b748be113643a5eca886679e8528c40e51d71ad212b301627765e9bc7b1"} Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.380689 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.870281362 podStartE2EDuration="7.380660333s" podCreationTimestamp="2025-11-23 08:11:16 +0000 UTC" firstStartedPulling="2025-11-23 08:11:18.566434815 +0000 UTC m=+5130.874947578" lastFinishedPulling="2025-11-23 08:11:22.076813786 +0000 UTC m=+5134.385326549" observedRunningTime="2025-11-23 08:11:23.372944944 +0000 UTC m=+5135.681457747" watchObservedRunningTime="2025-11-23 08:11:23.380660333 +0000 UTC m=+5135.689173136" Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.426018 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.455706784 podStartE2EDuration="7.425993514s" podCreationTimestamp="2025-11-23 08:11:16 +0000 UTC" firstStartedPulling="2025-11-23 08:11:18.112619345 +0000 UTC m=+5130.421132108" lastFinishedPulling="2025-11-23 08:11:22.082906075 +0000 UTC m=+5134.391418838" observedRunningTime="2025-11-23 08:11:23.402518418 +0000 UTC m=+5135.711031221" watchObservedRunningTime="2025-11-23 08:11:23.425993514 +0000 UTC m=+5135.734506287" Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.451180 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.363330845 podStartE2EDuration="6.45114825s" podCreationTimestamp="2025-11-23 08:11:17 +0000 UTC" firstStartedPulling="2025-11-23 08:11:20.000027731 +0000 UTC m=+5132.308540494" lastFinishedPulling="2025-11-23 08:11:22.087845146 +0000 UTC m=+5134.396357899" observedRunningTime="2025-11-23 08:11:23.434634155 +0000 UTC m=+5135.743146958" watchObservedRunningTime="2025-11-23 08:11:23.45114825 +0000 UTC m=+5135.759661063" Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.464402 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.016876916 podStartE2EDuration="6.464374914s" podCreationTimestamp="2025-11-23 08:11:17 +0000 UTC" firstStartedPulling="2025-11-23 08:11:19.640338588 +0000 UTC m=+5131.948851351" lastFinishedPulling="2025-11-23 08:11:22.087836586 +0000 UTC m=+5134.396349349" observedRunningTime="2025-11-23 08:11:23.463893472 +0000 UTC m=+5135.772406245" watchObservedRunningTime="2025-11-23 08:11:23.464374914 +0000 UTC m=+5135.772887697" Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.501455 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.184507416 podStartE2EDuration="6.501433242s" podCreationTimestamp="2025-11-23 08:11:17 +0000 UTC" firstStartedPulling="2025-11-23 08:11:21.772791326 +0000 UTC m=+5134.081304099" lastFinishedPulling="2025-11-23 08:11:22.089717162 +0000 UTC m=+5134.398229925" observedRunningTime="2025-11-23 08:11:23.496556513 +0000 UTC m=+5135.805069286" watchObservedRunningTime="2025-11-23 08:11:23.501433242 +0000 UTC m=+5135.809946045" Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.533088 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" 
podStartSLOduration=4.54769856 podStartE2EDuration="7.533059747s" podCreationTimestamp="2025-11-23 08:11:16 +0000 UTC" firstStartedPulling="2025-11-23 08:11:19.104300244 +0000 UTC m=+5131.412813007" lastFinishedPulling="2025-11-23 08:11:22.089661431 +0000 UTC m=+5134.398174194" observedRunningTime="2025-11-23 08:11:23.52501783 +0000 UTC m=+5135.833530623" watchObservedRunningTime="2025-11-23 08:11:23.533059747 +0000 UTC m=+5135.841572530" Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.629515 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.940058 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:23 crc kubenswrapper[4988]: I1123 08:11:23.952578 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:24 crc kubenswrapper[4988]: I1123 08:11:24.067390 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:24 crc kubenswrapper[4988]: I1123 08:11:24.333790 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:24 crc kubenswrapper[4988]: I1123 08:11:24.342189 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:25 crc kubenswrapper[4988]: I1123 08:11:25.067799 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:25 crc kubenswrapper[4988]: I1123 08:11:25.122900 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:25 crc kubenswrapper[4988]: I1123 08:11:25.333316 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:25 crc kubenswrapper[4988]: I1123 08:11:25.342897 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:25 crc kubenswrapper[4988]: I1123 08:11:25.393762 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:25 crc kubenswrapper[4988]: I1123 08:11:25.412977 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:26 crc kubenswrapper[4988]: I1123 08:11:26.694329 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:26 crc kubenswrapper[4988]: I1123 08:11:26.695059 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.001308 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.001687 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.017488 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.017906 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.068481 4988 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.082603 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.359762 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bc74cd9c-kbr9t"] Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.361236 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.363163 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.370332 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bc74cd9c-kbr9t"] Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.440688 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.444428 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.447948 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9ctk\" (UniqueName: \"kubernetes.io/projected/940895c6-f326-4653-9990-c0bc0cfd9599-kube-api-access-n9ctk\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.448162 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-ovsdbserver-nb\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.448245 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-config\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.448567 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-dns-svc\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.449936 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.549835 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9ctk\" (UniqueName: \"kubernetes.io/projected/940895c6-f326-4653-9990-c0bc0cfd9599-kube-api-access-n9ctk\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.549927 4988 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-ovsdbserver-nb\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.549963 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-config\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.550006 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-dns-svc\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.552299 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-dns-svc\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.552300 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-config\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.552531 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-ovsdbserver-nb\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.571064 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9ctk\" (UniqueName: \"kubernetes.io/projected/940895c6-f326-4653-9990-c0bc0cfd9599-kube-api-access-n9ctk\") pod \"dnsmasq-dns-7bc74cd9c-kbr9t\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.678647 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.928314 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bc74cd9c-kbr9t"] Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.963781 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64cc488cc-88wjv"] Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.965033 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.968826 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 23 08:11:27 crc kubenswrapper[4988]: I1123 08:11:27.975781 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64cc488cc-88wjv"] Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.162124 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-sb\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.162182 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-dns-svc\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.162242 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-config\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.162279 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-nb\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.162312 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klvnq\" (UniqueName: \"kubernetes.io/projected/4446799b-7495-4b41-bc36-585b07137bab-kube-api-access-klvnq\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.163754 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bc74cd9c-kbr9t"] Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.263996 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-nb\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.264090 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klvnq\" (UniqueName: \"kubernetes.io/projected/4446799b-7495-4b41-bc36-585b07137bab-kube-api-access-klvnq\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.264229 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-sb\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.264277 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-dns-svc\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.264351 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-config\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.265897 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-nb\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.265964 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-sb\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.265917 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-dns-svc\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.266340 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-config\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.283108 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klvnq\" (UniqueName: \"kubernetes.io/projected/4446799b-7495-4b41-bc36-585b07137bab-kube-api-access-klvnq\") pod \"dnsmasq-dns-64cc488cc-88wjv\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.294257 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.404759 4988 generic.go:334] "Generic (PLEG): container finished" podID="940895c6-f326-4653-9990-c0bc0cfd9599" containerID="11b48d1804742fda7b7f213347883cff011f282e4a2b89d98d1d356c5d254d61" exitCode=0 Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.404805 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" event={"ID":"940895c6-f326-4653-9990-c0bc0cfd9599","Type":"ContainerDied","Data":"11b48d1804742fda7b7f213347883cff011f282e4a2b89d98d1d356c5d254d61"} Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.404871 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" event={"ID":"940895c6-f326-4653-9990-c0bc0cfd9599","Type":"ContainerStarted","Data":"01c60089a878dcc965095f5225d64e50bd6f2e78744034830d2c9e21b564663b"} Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.771876 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64cc488cc-88wjv"] Nov 23 08:11:28 crc kubenswrapper[4988]: W1123 08:11:28.775795 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4446799b_7495_4b41_bc36_585b07137bab.slice/crio-ed2071fb819213e44e62c01299f6a11b50293da60351b6347c5c69917305b66e WatchSource:0}: Error finding container ed2071fb819213e44e62c01299f6a11b50293da60351b6347c5c69917305b66e: Status 404 returned error can't find the container with id ed2071fb819213e44e62c01299f6a11b50293da60351b6347c5c69917305b66e Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.884364 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.981521 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-dns-svc\") pod \"940895c6-f326-4653-9990-c0bc0cfd9599\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.981614 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-ovsdbserver-nb\") pod \"940895c6-f326-4653-9990-c0bc0cfd9599\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.981684 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-config\") pod \"940895c6-f326-4653-9990-c0bc0cfd9599\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.981751 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9ctk\" (UniqueName: \"kubernetes.io/projected/940895c6-f326-4653-9990-c0bc0cfd9599-kube-api-access-n9ctk\") pod \"940895c6-f326-4653-9990-c0bc0cfd9599\" (UID: \"940895c6-f326-4653-9990-c0bc0cfd9599\") " Nov 23 08:11:28 crc kubenswrapper[4988]: I1123 08:11:28.988310 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/940895c6-f326-4653-9990-c0bc0cfd9599-kube-api-access-n9ctk" (OuterVolumeSpecName: "kube-api-access-n9ctk") pod 
"940895c6-f326-4653-9990-c0bc0cfd9599" (UID: "940895c6-f326-4653-9990-c0bc0cfd9599"). InnerVolumeSpecName "kube-api-access-n9ctk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.008640 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "940895c6-f326-4653-9990-c0bc0cfd9599" (UID: "940895c6-f326-4653-9990-c0bc0cfd9599"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.011059 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "940895c6-f326-4653-9990-c0bc0cfd9599" (UID: "940895c6-f326-4653-9990-c0bc0cfd9599"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.014784 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-config" (OuterVolumeSpecName: "config") pod "940895c6-f326-4653-9990-c0bc0cfd9599" (UID: "940895c6-f326-4653-9990-c0bc0cfd9599"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.084817 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.084881 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.084911 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940895c6-f326-4653-9990-c0bc0cfd9599-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.084936 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9ctk\" (UniqueName: \"kubernetes.io/projected/940895c6-f326-4653-9990-c0bc0cfd9599-kube-api-access-n9ctk\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.117611 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.415345 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" event={"ID":"940895c6-f326-4653-9990-c0bc0cfd9599","Type":"ContainerDied","Data":"01c60089a878dcc965095f5225d64e50bd6f2e78744034830d2c9e21b564663b"} Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.415396 4988 scope.go:117] "RemoveContainer" containerID="11b48d1804742fda7b7f213347883cff011f282e4a2b89d98d1d356c5d254d61" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.415467 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bc74cd9c-kbr9t" Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.417238 4988 generic.go:334] "Generic (PLEG): container finished" podID="4446799b-7495-4b41-bc36-585b07137bab" containerID="b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46" exitCode=0 Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.417277 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" event={"ID":"4446799b-7495-4b41-bc36-585b07137bab","Type":"ContainerDied","Data":"b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46"} Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.417301 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" event={"ID":"4446799b-7495-4b41-bc36-585b07137bab","Type":"ContainerStarted","Data":"ed2071fb819213e44e62c01299f6a11b50293da60351b6347c5c69917305b66e"} Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.480983 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bc74cd9c-kbr9t"] Nov 23 08:11:29 crc kubenswrapper[4988]: I1123 08:11:29.487115 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bc74cd9c-kbr9t"] Nov 23 08:11:30 crc kubenswrapper[4988]: I1123 08:11:30.434691 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" event={"ID":"4446799b-7495-4b41-bc36-585b07137bab","Type":"ContainerStarted","Data":"bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc"} Nov 23 08:11:30 crc kubenswrapper[4988]: I1123 08:11:30.435274 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:30 crc kubenswrapper[4988]: I1123 08:11:30.475234 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" podStartSLOduration=3.475177232 podStartE2EDuration="3.475177232s" podCreationTimestamp="2025-11-23 08:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:11:30.468027657 +0000 UTC m=+5142.776540470" watchObservedRunningTime="2025-11-23 08:11:30.475177232 +0000 UTC m=+5142.783690025" Nov 23 08:11:30 crc kubenswrapper[4988]: I1123 08:11:30.515227 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="940895c6-f326-4653-9990-c0bc0cfd9599" path="/var/lib/kubelet/pods/940895c6-f326-4653-9990-c0bc0cfd9599/volumes" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.032075 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Nov 23 08:11:32 crc kubenswrapper[4988]: E1123 08:11:32.033347 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="940895c6-f326-4653-9990-c0bc0cfd9599" containerName="init" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.033382 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="940895c6-f326-4653-9990-c0bc0cfd9599" containerName="init" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.033794 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="940895c6-f326-4653-9990-c0bc0cfd9599" containerName="init" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.034967 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.038151 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.048439 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.227426 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c25fdd76-b138-43e4-bcb2-34ce86c53e02-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.227803 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.227881 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fgrp\" (UniqueName: \"kubernetes.io/projected/c25fdd76-b138-43e4-bcb2-34ce86c53e02-kube-api-access-2fgrp\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.330632 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c25fdd76-b138-43e4-bcb2-34ce86c53e02-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.330858 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.330916 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fgrp\" (UniqueName: \"kubernetes.io/projected/c25fdd76-b138-43e4-bcb2-34ce86c53e02-kube-api-access-2fgrp\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.337499 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.337555 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/73a17c6d3b435fd5c3b2a429d244380bf0968d0c981daa8d7d15961d013eda15/globalmount\"" pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.342914 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c25fdd76-b138-43e4-bcb2-34ce86c53e02-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.365312 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fgrp\" (UniqueName: \"kubernetes.io/projected/c25fdd76-b138-43e4-bcb2-34ce86c53e02-kube-api-access-2fgrp\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.415422 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\") pod \"ovn-copy-data\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " pod="openstack/ovn-copy-data" Nov 23 08:11:32 crc kubenswrapper[4988]: I1123 08:11:32.672097 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 23 08:11:33 crc kubenswrapper[4988]: I1123 08:11:33.274312 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 08:11:33 crc kubenswrapper[4988]: W1123 08:11:33.276330 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc25fdd76_b138_43e4_bcb2_34ce86c53e02.slice/crio-b29a6564dd1ffcc11839b7741a7b13d87448d49b77e1c837c66e4cc9d6ab8b43 WatchSource:0}: Error finding container b29a6564dd1ffcc11839b7741a7b13d87448d49b77e1c837c66e4cc9d6ab8b43: Status 404 returned error can't find the container with id b29a6564dd1ffcc11839b7741a7b13d87448d49b77e1c837c66e4cc9d6ab8b43 Nov 23 08:11:33 crc kubenswrapper[4988]: I1123 08:11:33.470475 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c25fdd76-b138-43e4-bcb2-34ce86c53e02","Type":"ContainerStarted","Data":"b29a6564dd1ffcc11839b7741a7b13d87448d49b77e1c837c66e4cc9d6ab8b43"} Nov 23 08:11:34 crc kubenswrapper[4988]: I1123 08:11:34.494864 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c25fdd76-b138-43e4-bcb2-34ce86c53e02","Type":"ContainerStarted","Data":"588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9"} Nov 23 08:11:34 crc kubenswrapper[4988]: I1123 08:11:34.519704 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.298903311 podStartE2EDuration="3.5196756s" podCreationTimestamp="2025-11-23 08:11:31 +0000 UTC" firstStartedPulling="2025-11-23 08:11:33.279714579 +0000 UTC m=+5145.588227382" lastFinishedPulling="2025-11-23 08:11:33.500486908 +0000 UTC m=+5145.808999671" observedRunningTime="2025-11-23 08:11:34.517509907 +0000 UTC m=+5146.826022710" watchObservedRunningTime="2025-11-23 08:11:34.5196756 +0000 UTC m=+5146.828188403" Nov 23 08:11:35 crc kubenswrapper[4988]: I1123 08:11:35.498134 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:11:35 crc kubenswrapper[4988]: E1123 08:11:35.498590 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:11:38 crc kubenswrapper[4988]: I1123 08:11:38.296627 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:11:38 crc kubenswrapper[4988]: I1123 08:11:38.376590 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-97464f77-cjzc6"] Nov 23 08:11:38 crc kubenswrapper[4988]: I1123 08:11:38.376947 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-97464f77-cjzc6" podUID="d95ee697-07b0-42af-8d15-57661cd6de66" containerName="dnsmasq-dns" containerID="cri-o://59566b8bc3b63316ff5ca4ff2481134ff43cc3308e612e89ed546eddeec3159d" gracePeriod=10 Nov 23 08:11:38 crc kubenswrapper[4988]: I1123 08:11:38.532567 4988 generic.go:334] "Generic (PLEG): container finished" podID="d95ee697-07b0-42af-8d15-57661cd6de66" 
containerID="59566b8bc3b63316ff5ca4ff2481134ff43cc3308e612e89ed546eddeec3159d" exitCode=0 Nov 23 08:11:38 crc kubenswrapper[4988]: I1123 08:11:38.532631 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-cjzc6" event={"ID":"d95ee697-07b0-42af-8d15-57661cd6de66","Type":"ContainerDied","Data":"59566b8bc3b63316ff5ca4ff2481134ff43cc3308e612e89ed546eddeec3159d"} Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.132289 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.260037 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-config\") pod \"d95ee697-07b0-42af-8d15-57661cd6de66\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.260145 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5qtw\" (UniqueName: \"kubernetes.io/projected/d95ee697-07b0-42af-8d15-57661cd6de66-kube-api-access-z5qtw\") pod \"d95ee697-07b0-42af-8d15-57661cd6de66\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.260218 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-dns-svc\") pod \"d95ee697-07b0-42af-8d15-57661cd6de66\" (UID: \"d95ee697-07b0-42af-8d15-57661cd6de66\") " Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.266172 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95ee697-07b0-42af-8d15-57661cd6de66-kube-api-access-z5qtw" (OuterVolumeSpecName: "kube-api-access-z5qtw") pod "d95ee697-07b0-42af-8d15-57661cd6de66" (UID: "d95ee697-07b0-42af-8d15-57661cd6de66"). InnerVolumeSpecName "kube-api-access-z5qtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.307742 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-config" (OuterVolumeSpecName: "config") pod "d95ee697-07b0-42af-8d15-57661cd6de66" (UID: "d95ee697-07b0-42af-8d15-57661cd6de66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.311797 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d95ee697-07b0-42af-8d15-57661cd6de66" (UID: "d95ee697-07b0-42af-8d15-57661cd6de66"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.362462 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.362495 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95ee697-07b0-42af-8d15-57661cd6de66-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.362529 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5qtw\" (UniqueName: \"kubernetes.io/projected/d95ee697-07b0-42af-8d15-57661cd6de66-kube-api-access-z5qtw\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.548234 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-cjzc6" event={"ID":"d95ee697-07b0-42af-8d15-57661cd6de66","Type":"ContainerDied","Data":"bd2df50b630c97a876c89b01daf2fa5cabb31aa4dee9bf01ba6cb88f5d3c5224"} Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.548327 4988 scope.go:117] "RemoveContainer" containerID="59566b8bc3b63316ff5ca4ff2481134ff43cc3308e612e89ed546eddeec3159d" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.548509 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-97464f77-cjzc6" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.586993 4988 scope.go:117] "RemoveContainer" containerID="22b38bf2fe9724ef0c8f7ddd3f3ea51b156c053091684fc33e46897e87b3b899" Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.611429 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-97464f77-cjzc6"] Nov 23 08:11:39 crc kubenswrapper[4988]: I1123 08:11:39.623057 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-97464f77-cjzc6"] Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.267635 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 23 08:11:40 crc kubenswrapper[4988]: E1123 08:11:40.269161 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95ee697-07b0-42af-8d15-57661cd6de66" containerName="dnsmasq-dns" Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.269187 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95ee697-07b0-42af-8d15-57661cd6de66" containerName="dnsmasq-dns" Nov 23 08:11:40 crc kubenswrapper[4988]: E1123 08:11:40.269226 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95ee697-07b0-42af-8d15-57661cd6de66" containerName="init" Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.269234 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95ee697-07b0-42af-8d15-57661cd6de66" containerName="init" Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.269466 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95ee697-07b0-42af-8d15-57661cd6de66" containerName="dnsmasq-dns" Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.270477 4988 util.go:30] "No sandbox for pod can be found. 
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.273550 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.274183 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.274631 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-fh98r"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.274892 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.299634 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.381170 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-config\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.381226 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.381254 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvxk5\" (UniqueName: \"kubernetes.io/projected/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-kube-api-access-hvxk5\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.381270 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-scripts\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.381299 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.381730 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.381918 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.483040 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.483109 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-config\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.483130 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.483154 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvxk5\" (UniqueName: \"kubernetes.io/projected/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-kube-api-access-hvxk5\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.483171 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-scripts\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.483187 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.483276 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.484285 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-config\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.484291 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.484472 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-scripts\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.487541 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.487609 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.497845 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.501988 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvxk5\" (UniqueName: \"kubernetes.io/projected/5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5-kube-api-access-hvxk5\") pod \"ovn-northd-0\" (UID: \"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5\") " pod="openstack/ovn-northd-0"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.507576 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d95ee697-07b0-42af-8d15-57661cd6de66" path="/var/lib/kubelet/pods/d95ee697-07b0-42af-8d15-57661cd6de66/volumes"
Nov 23 08:11:40 crc kubenswrapper[4988]: I1123 08:11:40.593548 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 23 08:11:41 crc kubenswrapper[4988]: I1123 08:11:41.126003 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 23 08:11:41 crc kubenswrapper[4988]: W1123 08:11:41.135263 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fbbcf78_c7e7_40af_a7d2_1e82b6ca71c5.slice/crio-7ef4cebf04534fcfcb97a6ca190cc5f9dde7db5d83a286cbe393fab594ecdd31 WatchSource:0}: Error finding container 7ef4cebf04534fcfcb97a6ca190cc5f9dde7db5d83a286cbe393fab594ecdd31: Status 404 returned error can't find the container with id 7ef4cebf04534fcfcb97a6ca190cc5f9dde7db5d83a286cbe393fab594ecdd31
Nov 23 08:11:41 crc kubenswrapper[4988]: I1123 08:11:41.577375 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5","Type":"ContainerStarted","Data":"7ef4cebf04534fcfcb97a6ca190cc5f9dde7db5d83a286cbe393fab594ecdd31"}
Nov 23 08:11:42 crc kubenswrapper[4988]: I1123 08:11:42.587915 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5","Type":"ContainerStarted","Data":"ee685a03a6582e0e7132e153dbde5b18b1c34353c2942de39fa974f2f1deef2a"}
Nov 23 08:11:42 crc kubenswrapper[4988]: I1123 08:11:42.589285 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5","Type":"ContainerStarted","Data":"974d341916bc97d7594aea302fff40952e1cccd67e3dec1f56e9bf3efe92d3d3"}
Nov 23 08:11:42 crc kubenswrapper[4988]: I1123 08:11:42.589397 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
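
Note: each volume above moves through three logged stages: "operationExecutor.VerifyControllerAttachedVolume started", "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded" (or MountDevice for CSI block devices, as with the PVC earlier). A small, purely illustrative Go filter that tracks the last stage seen per volume when this journal text is fed on stdin; not kubelet code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        // Matches the volume name in records like: for volume \"config\" (UniqueName: ...
        volRe := regexp.MustCompile(`for volume \\?"([^"\\]+)\\?"`)
        stage := map[string]string{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            m := volRe.FindStringSubmatch(line)
            if m == nil {
                continue
            }
            switch {
            case strings.Contains(line, "VerifyControllerAttachedVolume started"):
                stage[m[1]] = "attach-verified"
            case strings.Contains(line, "MountVolume started"):
                stage[m[1]] = "mount-started"
            case strings.Contains(line, "SetUp succeeded"):
                stage[m[1]] = "mounted"
            }
        }
        for v, s := range stage {
            fmt.Printf("%-40s %s\n", v, s)
        }
    }
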
Nov 23 08:11:42 crc kubenswrapper[4988]: I1123 08:11:42.616932 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.90891294 podStartE2EDuration="2.616911308s" podCreationTimestamp="2025-11-23 08:11:40 +0000 UTC" firstStartedPulling="2025-11-23 08:11:41.139243942 +0000 UTC m=+5153.447756705" lastFinishedPulling="2025-11-23 08:11:41.84724228 +0000 UTC m=+5154.155755073" observedRunningTime="2025-11-23 08:11:42.610977472 +0000 UTC m=+5154.919490245" watchObservedRunningTime="2025-11-23 08:11:42.616911308 +0000 UTC m=+5154.925424071"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.766936 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-t4px9"]
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.768292 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t4px9"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.782677 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-t4px9"]
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.783801 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-operator-scripts\") pod \"keystone-db-create-t4px9\" (UID: \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\") " pod="openstack/keystone-db-create-t4px9"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.783987 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlc9k\" (UniqueName: \"kubernetes.io/projected/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-kube-api-access-dlc9k\") pod \"keystone-db-create-t4px9\" (UID: \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\") " pod="openstack/keystone-db-create-t4px9"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.869856 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fa70-account-create-hdw5g"]
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.871050 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fa70-account-create-hdw5g"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.874075 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.884906 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fa70-account-create-hdw5g"]
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.885671 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hsgg\" (UniqueName: \"kubernetes.io/projected/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-kube-api-access-7hsgg\") pod \"keystone-fa70-account-create-hdw5g\" (UID: \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\") " pod="openstack/keystone-fa70-account-create-hdw5g"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.885763 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlc9k\" (UniqueName: \"kubernetes.io/projected/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-kube-api-access-dlc9k\") pod \"keystone-db-create-t4px9\" (UID: \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\") " pod="openstack/keystone-db-create-t4px9"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.885823 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-operator-scripts\") pod \"keystone-db-create-t4px9\" (UID: \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\") " pod="openstack/keystone-db-create-t4px9"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.886056 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-operator-scripts\") pod \"keystone-fa70-account-create-hdw5g\" (UID: \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\") " pod="openstack/keystone-fa70-account-create-hdw5g"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.887539 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-operator-scripts\") pod \"keystone-db-create-t4px9\" (UID: \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\") " pod="openstack/keystone-db-create-t4px9"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.911634 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlc9k\" (UniqueName: \"kubernetes.io/projected/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-kube-api-access-dlc9k\") pod \"keystone-db-create-t4px9\" (UID: \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\") " pod="openstack/keystone-db-create-t4px9"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.988116 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-operator-scripts\") pod \"keystone-fa70-account-create-hdw5g\" (UID: \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\") " pod="openstack/keystone-fa70-account-create-hdw5g"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.988217 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hsgg\" (UniqueName: \"kubernetes.io/projected/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-kube-api-access-7hsgg\") pod \"keystone-fa70-account-create-hdw5g\" (UID: \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\") " pod="openstack/keystone-fa70-account-create-hdw5g"
Nov 23 08:11:45 crc kubenswrapper[4988]: I1123 08:11:45.988997 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-operator-scripts\") pod \"keystone-fa70-account-create-hdw5g\" (UID: \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\") " pod="openstack/keystone-fa70-account-create-hdw5g"
Nov 23 08:11:46 crc kubenswrapper[4988]: I1123 08:11:46.008205 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hsgg\" (UniqueName: \"kubernetes.io/projected/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-kube-api-access-7hsgg\") pod \"keystone-fa70-account-create-hdw5g\" (UID: \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\") " pod="openstack/keystone-fa70-account-create-hdw5g"
Nov 23 08:11:46 crc kubenswrapper[4988]: I1123 08:11:46.138107 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t4px9"
Nov 23 08:11:46 crc kubenswrapper[4988]: I1123 08:11:46.191606 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fa70-account-create-hdw5g"
Nov 23 08:11:46 crc kubenswrapper[4988]: I1123 08:11:46.625852 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-t4px9"]
Nov 23 08:11:46 crc kubenswrapper[4988]: W1123 08:11:46.626710 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2364b99b_fe19_46f3_abdc_d5f0cc41b80f.slice/crio-0178b84d55d91eb74b0e2d040adddde605303580a79caf7b54afb6b098afd715 WatchSource:0}: Error finding container 0178b84d55d91eb74b0e2d040adddde605303580a79caf7b54afb6b098afd715: Status 404 returned error can't find the container with id 0178b84d55d91eb74b0e2d040adddde605303580a79caf7b54afb6b098afd715
Nov 23 08:11:46 crc kubenswrapper[4988]: I1123 08:11:46.727618 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fa70-account-create-hdw5g"]
Nov 23 08:11:46 crc kubenswrapper[4988]: W1123 08:11:46.741580 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebd8b350_8faf_4ee1_acc5_d550e98f0a3e.slice/crio-c1988421aa04c307faff139f624cc0cf8b4ecfe656d7c6cc65b8baea98f83257 WatchSource:0}: Error finding container c1988421aa04c307faff139f624cc0cf8b4ecfe656d7c6cc65b8baea98f83257: Status 404 returned error can't find the container with id c1988421aa04c307faff139f624cc0cf8b4ecfe656d7c6cc65b8baea98f83257
Nov 23 08:11:47 crc kubenswrapper[4988]: I1123 08:11:47.496932 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9"
Nov 23 08:11:47 crc kubenswrapper[4988]: E1123 08:11:47.497767 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:11:47 crc kubenswrapper[4988]: I1123 08:11:47.652594 4988 generic.go:334] "Generic (PLEG): container finished" podID="ebd8b350-8faf-4ee1-acc5-d550e98f0a3e" containerID="79a7db4bee8568ccf56583cd5e14f33e3659b45e6b2626bcdf43b91d71148a26" exitCode=0
containerID="79a7db4bee8568ccf56583cd5e14f33e3659b45e6b2626bcdf43b91d71148a26" exitCode=0 Nov 23 08:11:47 crc kubenswrapper[4988]: I1123 08:11:47.652741 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fa70-account-create-hdw5g" event={"ID":"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e","Type":"ContainerDied","Data":"79a7db4bee8568ccf56583cd5e14f33e3659b45e6b2626bcdf43b91d71148a26"} Nov 23 08:11:47 crc kubenswrapper[4988]: I1123 08:11:47.652793 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fa70-account-create-hdw5g" event={"ID":"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e","Type":"ContainerStarted","Data":"c1988421aa04c307faff139f624cc0cf8b4ecfe656d7c6cc65b8baea98f83257"} Nov 23 08:11:47 crc kubenswrapper[4988]: I1123 08:11:47.657545 4988 generic.go:334] "Generic (PLEG): container finished" podID="2364b99b-fe19-46f3-abdc-d5f0cc41b80f" containerID="5f7f84fe227043860f32b575baf5b2b744fba16189ae429bba94df3da9925f5b" exitCode=0 Nov 23 08:11:47 crc kubenswrapper[4988]: I1123 08:11:47.657624 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-t4px9" event={"ID":"2364b99b-fe19-46f3-abdc-d5f0cc41b80f","Type":"ContainerDied","Data":"5f7f84fe227043860f32b575baf5b2b744fba16189ae429bba94df3da9925f5b"} Nov 23 08:11:47 crc kubenswrapper[4988]: I1123 08:11:47.657672 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-t4px9" event={"ID":"2364b99b-fe19-46f3-abdc-d5f0cc41b80f","Type":"ContainerStarted","Data":"0178b84d55d91eb74b0e2d040adddde605303580a79caf7b54afb6b098afd715"} Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.112320 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t4px9" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.121549 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-fa70-account-create-hdw5g" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.248268 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-operator-scripts\") pod \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\" (UID: \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\") " Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.248393 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-operator-scripts\") pod \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\" (UID: \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\") " Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.248473 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hsgg\" (UniqueName: \"kubernetes.io/projected/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-kube-api-access-7hsgg\") pod \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\" (UID: \"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e\") " Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.248546 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlc9k\" (UniqueName: \"kubernetes.io/projected/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-kube-api-access-dlc9k\") pod \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\" (UID: \"2364b99b-fe19-46f3-abdc-d5f0cc41b80f\") " Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.249320 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2364b99b-fe19-46f3-abdc-d5f0cc41b80f" (UID: "2364b99b-fe19-46f3-abdc-d5f0cc41b80f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.249796 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ebd8b350-8faf-4ee1-acc5-d550e98f0a3e" (UID: "ebd8b350-8faf-4ee1-acc5-d550e98f0a3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.255238 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-kube-api-access-7hsgg" (OuterVolumeSpecName: "kube-api-access-7hsgg") pod "ebd8b350-8faf-4ee1-acc5-d550e98f0a3e" (UID: "ebd8b350-8faf-4ee1-acc5-d550e98f0a3e"). InnerVolumeSpecName "kube-api-access-7hsgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.257295 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-kube-api-access-dlc9k" (OuterVolumeSpecName: "kube-api-access-dlc9k") pod "2364b99b-fe19-46f3-abdc-d5f0cc41b80f" (UID: "2364b99b-fe19-46f3-abdc-d5f0cc41b80f"). InnerVolumeSpecName "kube-api-access-dlc9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.351308 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.351357 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.351398 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hsgg\" (UniqueName: \"kubernetes.io/projected/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e-kube-api-access-7hsgg\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.351414 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlc9k\" (UniqueName: \"kubernetes.io/projected/2364b99b-fe19-46f3-abdc-d5f0cc41b80f-kube-api-access-dlc9k\") on node \"crc\" DevicePath \"\"" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.674124 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-t4px9" event={"ID":"2364b99b-fe19-46f3-abdc-d5f0cc41b80f","Type":"ContainerDied","Data":"0178b84d55d91eb74b0e2d040adddde605303580a79caf7b54afb6b098afd715"} Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.674679 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0178b84d55d91eb74b0e2d040adddde605303580a79caf7b54afb6b098afd715" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.674161 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t4px9" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.683330 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fa70-account-create-hdw5g" event={"ID":"ebd8b350-8faf-4ee1-acc5-d550e98f0a3e","Type":"ContainerDied","Data":"c1988421aa04c307faff139f624cc0cf8b4ecfe656d7c6cc65b8baea98f83257"} Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.683368 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1988421aa04c307faff139f624cc0cf8b4ecfe656d7c6cc65b8baea98f83257" Nov 23 08:11:49 crc kubenswrapper[4988]: I1123 08:11:49.683480 4988 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.373751 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-bsjwl"]
Nov 23 08:11:51 crc kubenswrapper[4988]: E1123 08:11:51.374302 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2364b99b-fe19-46f3-abdc-d5f0cc41b80f" containerName="mariadb-database-create"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.374314 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2364b99b-fe19-46f3-abdc-d5f0cc41b80f" containerName="mariadb-database-create"
Nov 23 08:11:51 crc kubenswrapper[4988]: E1123 08:11:51.374346 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd8b350-8faf-4ee1-acc5-d550e98f0a3e" containerName="mariadb-account-create"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.374352 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd8b350-8faf-4ee1-acc5-d550e98f0a3e" containerName="mariadb-account-create"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.374514 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2364b99b-fe19-46f3-abdc-d5f0cc41b80f" containerName="mariadb-database-create"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.374537 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd8b350-8faf-4ee1-acc5-d550e98f0a3e" containerName="mariadb-account-create"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.375102 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.377817 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.378314 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.378355 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5hn2n"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.378719 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.394084 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bsjwl"]
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.487305 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-config-data\") pod \"keystone-db-sync-bsjwl\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") " pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.487634 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xj6v\" (UniqueName: \"kubernetes.io/projected/67755350-66e6-4df3-8736-8761deed3ea7-kube-api-access-5xj6v\") pod \"keystone-db-sync-bsjwl\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") " pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.487801 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-combined-ca-bundle\") pod \"keystone-db-sync-bsjwl\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") " pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.589254 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xj6v\" (UniqueName: \"kubernetes.io/projected/67755350-66e6-4df3-8736-8761deed3ea7-kube-api-access-5xj6v\") pod \"keystone-db-sync-bsjwl\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") " pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.589335 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-combined-ca-bundle\") pod \"keystone-db-sync-bsjwl\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") " pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.589405 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-config-data\") pod \"keystone-db-sync-bsjwl\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") " pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.597799 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-config-data\") pod \"keystone-db-sync-bsjwl\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") " pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.597939 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-combined-ca-bundle\") pod \"keystone-db-sync-bsjwl\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") " pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.621044 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xj6v\" (UniqueName: \"kubernetes.io/projected/67755350-66e6-4df3-8736-8761deed3ea7-kube-api-access-5xj6v\") pod \"keystone-db-sync-bsjwl\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") " pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:51 crc kubenswrapper[4988]: I1123 08:11:51.711450 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:11:52 crc kubenswrapper[4988]: I1123 08:11:52.242624 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-bsjwl"]
Nov 23 08:11:52 crc kubenswrapper[4988]: W1123 08:11:52.246428 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67755350_66e6_4df3_8736_8761deed3ea7.slice/crio-a5afb9181b66cf5ceaa2fc01094e794a1012ea4afc8644da549dfd021e15d4f7 WatchSource:0}: Error finding container a5afb9181b66cf5ceaa2fc01094e794a1012ea4afc8644da549dfd021e15d4f7: Status 404 returned error can't find the container with id a5afb9181b66cf5ceaa2fc01094e794a1012ea4afc8644da549dfd021e15d4f7
Nov 23 08:11:52 crc kubenswrapper[4988]: I1123 08:11:52.715988 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bsjwl" event={"ID":"67755350-66e6-4df3-8736-8761deed3ea7","Type":"ContainerStarted","Data":"a5afb9181b66cf5ceaa2fc01094e794a1012ea4afc8644da549dfd021e15d4f7"}
Nov 23 08:11:55 crc kubenswrapper[4988]: I1123 08:11:55.688237 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 23 08:11:57 crc kubenswrapper[4988]: I1123 08:11:57.777322 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bsjwl" event={"ID":"67755350-66e6-4df3-8736-8761deed3ea7","Type":"ContainerStarted","Data":"302a76d92ffad81b3acbda152c6e1fa4a725fa84b10a6937e62d6a3cd427b913"}
Nov 23 08:11:57 crc kubenswrapper[4988]: I1123 08:11:57.795855 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-bsjwl" podStartSLOduration=1.99329574 podStartE2EDuration="6.795837781s" podCreationTimestamp="2025-11-23 08:11:51 +0000 UTC" firstStartedPulling="2025-11-23 08:11:52.249619668 +0000 UTC m=+5164.558132431" lastFinishedPulling="2025-11-23 08:11:57.052161709 +0000 UTC m=+5169.360674472" observedRunningTime="2025-11-23 08:11:57.794630591 +0000 UTC m=+5170.103143374" watchObservedRunningTime="2025-11-23 08:11:57.795837781 +0000 UTC m=+5170.104350554"
Nov 23 08:11:59 crc kubenswrapper[4988]: I1123 08:11:59.797094 4988 generic.go:334] "Generic (PLEG): container finished" podID="67755350-66e6-4df3-8736-8761deed3ea7" containerID="302a76d92ffad81b3acbda152c6e1fa4a725fa84b10a6937e62d6a3cd427b913" exitCode=0
Nov 23 08:11:59 crc kubenswrapper[4988]: I1123 08:11:59.797260 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bsjwl" event={"ID":"67755350-66e6-4df3-8736-8761deed3ea7","Type":"ContainerDied","Data":"302a76d92ffad81b3acbda152c6e1fa4a725fa84b10a6937e62d6a3cd427b913"}
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.225214 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bsjwl"
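
Note: the same decomposition applied to the keystone-db-sync-bsjwl startup record above shows an image-pull-dominated start; assuming again that podStartSLOduration excludes the pull window:

    pull = 08:11:57.052161709 - 08:11:52.249619668 = 4.802542041s
    e2e  = 08:11:57.795837781 - 08:11:51.000000000 = 6.795837781s
    slo  = 6.795837781s - 4.802542041s             = 1.993295740s   (logged: podStartSLOduration=1.99329574)
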
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.385933 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-combined-ca-bundle\") pod \"67755350-66e6-4df3-8736-8761deed3ea7\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") "
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.386090 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-config-data\") pod \"67755350-66e6-4df3-8736-8761deed3ea7\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") "
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.386357 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xj6v\" (UniqueName: \"kubernetes.io/projected/67755350-66e6-4df3-8736-8761deed3ea7-kube-api-access-5xj6v\") pod \"67755350-66e6-4df3-8736-8761deed3ea7\" (UID: \"67755350-66e6-4df3-8736-8761deed3ea7\") "
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.392004 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67755350-66e6-4df3-8736-8761deed3ea7-kube-api-access-5xj6v" (OuterVolumeSpecName: "kube-api-access-5xj6v") pod "67755350-66e6-4df3-8736-8761deed3ea7" (UID: "67755350-66e6-4df3-8736-8761deed3ea7"). InnerVolumeSpecName "kube-api-access-5xj6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.432523 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67755350-66e6-4df3-8736-8761deed3ea7" (UID: "67755350-66e6-4df3-8736-8761deed3ea7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.432577 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-config-data" (OuterVolumeSpecName: "config-data") pod "67755350-66e6-4df3-8736-8761deed3ea7" (UID: "67755350-66e6-4df3-8736-8761deed3ea7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.488113 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.488159 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xj6v\" (UniqueName: \"kubernetes.io/projected/67755350-66e6-4df3-8736-8761deed3ea7-kube-api-access-5xj6v\") on node \"crc\" DevicePath \"\""
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.488170 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67755350-66e6-4df3-8736-8761deed3ea7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.828024 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-bsjwl" event={"ID":"67755350-66e6-4df3-8736-8761deed3ea7","Type":"ContainerDied","Data":"a5afb9181b66cf5ceaa2fc01094e794a1012ea4afc8644da549dfd021e15d4f7"}
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.828101 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5afb9181b66cf5ceaa2fc01094e794a1012ea4afc8644da549dfd021e15d4f7"
Nov 23 08:12:01 crc kubenswrapper[4988]: I1123 08:12:01.828251 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-bsjwl"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.069096 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8688b44c97-kwvpk"]
Nov 23 08:12:02 crc kubenswrapper[4988]: E1123 08:12:02.069441 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67755350-66e6-4df3-8736-8761deed3ea7" containerName="keystone-db-sync"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.069464 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="67755350-66e6-4df3-8736-8761deed3ea7" containerName="keystone-db-sync"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.069682 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="67755350-66e6-4df3-8736-8761deed3ea7" containerName="keystone-db-sync"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.071667 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.107370 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8688b44c97-kwvpk"]
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.107583 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-config\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.107632 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-sb\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.107671 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-dns-svc\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.107705 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-nb\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.107857 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgct\" (UniqueName: \"kubernetes.io/projected/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-kube-api-access-fxgct\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.137713 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hw4gg"]
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.142247 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.144423 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.144755 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.144797 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.148256 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5hn2n"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.148441 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.167724 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hw4gg"]
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210235 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-credential-keys\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210301 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgct\" (UniqueName: \"kubernetes.io/projected/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-kube-api-access-fxgct\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210335 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-config-data\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210412 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-config\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210448 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq4cz\" (UniqueName: \"kubernetes.io/projected/9e6fac22-ee17-49ca-9561-c84535484f57-kube-api-access-lq4cz\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210485 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-sb\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210502 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-fernet-keys\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210523 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-scripts\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210555 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-dns-svc\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210573 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-combined-ca-bundle\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.210591 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-nb\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.211412 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-sb\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.211601 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-nb\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.213269 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-dns-svc\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.213752 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-config\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.241357 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxgct\" (UniqueName: \"kubernetes.io/projected/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-kube-api-access-fxgct\") pod \"dnsmasq-dns-8688b44c97-kwvpk\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " pod="openstack/dnsmasq-dns-8688b44c97-kwvpk"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.312761 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-credential-keys\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.312825 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-config-data\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.312894 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4cz\" (UniqueName: \"kubernetes.io/projected/9e6fac22-ee17-49ca-9561-c84535484f57-kube-api-access-lq4cz\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.312922 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-fernet-keys\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.312943 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-scripts\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.312978 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-combined-ca-bundle\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.315758 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-credential-keys\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.318489 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-config-data\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.318530 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-scripts\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg"
pod="openstack/keystone-bootstrap-hw4gg" Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.322664 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-combined-ca-bundle\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg" Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.336870 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-fernet-keys\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg" Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.349821 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq4cz\" (UniqueName: \"kubernetes.io/projected/9e6fac22-ee17-49ca-9561-c84535484f57-kube-api-access-lq4cz\") pod \"keystone-bootstrap-hw4gg\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " pod="openstack/keystone-bootstrap-hw4gg" Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.409463 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.460220 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hw4gg" Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.497917 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:12:02 crc kubenswrapper[4988]: E1123 08:12:02.498429 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.850709 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8688b44c97-kwvpk"] Nov 23 08:12:02 crc kubenswrapper[4988]: W1123 08:12:02.855973 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e5cfc66_4b4d_476c_a704_0dc6de0f724f.slice/crio-e937d1e0724ce08418bf008bc7ee8d62e6c27067913e9bf8bd09b60fe9aa869b WatchSource:0}: Error finding container e937d1e0724ce08418bf008bc7ee8d62e6c27067913e9bf8bd09b60fe9aa869b: Status 404 returned error can't find the container with id e937d1e0724ce08418bf008bc7ee8d62e6c27067913e9bf8bd09b60fe9aa869b Nov 23 08:12:02 crc kubenswrapper[4988]: I1123 08:12:02.917836 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hw4gg"] Nov 23 08:12:03 crc kubenswrapper[4988]: I1123 08:12:03.844960 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hw4gg" event={"ID":"9e6fac22-ee17-49ca-9561-c84535484f57","Type":"ContainerStarted","Data":"95cad2a2d9c87dc6bb41fd491f8bfa6d9cc609fd42994e1efed96272c4fa1751"} Nov 23 08:12:03 crc kubenswrapper[4988]: I1123 08:12:03.845632 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hw4gg" 
event={"ID":"9e6fac22-ee17-49ca-9561-c84535484f57","Type":"ContainerStarted","Data":"d0869111ffea26182f4c62709ac93b8dbc92a3d5776faa4c416ca2af1ad9ecd4"} Nov 23 08:12:03 crc kubenswrapper[4988]: I1123 08:12:03.846907 4988 generic.go:334] "Generic (PLEG): container finished" podID="5e5cfc66-4b4d-476c-a704-0dc6de0f724f" containerID="7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e" exitCode=0 Nov 23 08:12:03 crc kubenswrapper[4988]: I1123 08:12:03.846944 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" event={"ID":"5e5cfc66-4b4d-476c-a704-0dc6de0f724f","Type":"ContainerDied","Data":"7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e"} Nov 23 08:12:03 crc kubenswrapper[4988]: I1123 08:12:03.846964 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" event={"ID":"5e5cfc66-4b4d-476c-a704-0dc6de0f724f","Type":"ContainerStarted","Data":"e937d1e0724ce08418bf008bc7ee8d62e6c27067913e9bf8bd09b60fe9aa869b"} Nov 23 08:12:03 crc kubenswrapper[4988]: I1123 08:12:03.923004 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hw4gg" podStartSLOduration=1.922979899 podStartE2EDuration="1.922979899s" podCreationTimestamp="2025-11-23 08:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:12:03.889256553 +0000 UTC m=+5176.197769396" watchObservedRunningTime="2025-11-23 08:12:03.922979899 +0000 UTC m=+5176.231492682" Nov 23 08:12:04 crc kubenswrapper[4988]: I1123 08:12:04.867541 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" event={"ID":"5e5cfc66-4b4d-476c-a704-0dc6de0f724f","Type":"ContainerStarted","Data":"31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29"} Nov 23 08:12:04 crc kubenswrapper[4988]: I1123 08:12:04.898171 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" podStartSLOduration=2.898152462 podStartE2EDuration="2.898152462s" podCreationTimestamp="2025-11-23 08:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:12:04.897528817 +0000 UTC m=+5177.206041580" watchObservedRunningTime="2025-11-23 08:12:04.898152462 +0000 UTC m=+5177.206665235" Nov 23 08:12:05 crc kubenswrapper[4988]: I1123 08:12:05.875730 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" Nov 23 08:12:06 crc kubenswrapper[4988]: I1123 08:12:06.891752 4988 generic.go:334] "Generic (PLEG): container finished" podID="9e6fac22-ee17-49ca-9561-c84535484f57" containerID="95cad2a2d9c87dc6bb41fd491f8bfa6d9cc609fd42994e1efed96272c4fa1751" exitCode=0 Nov 23 08:12:06 crc kubenswrapper[4988]: I1123 08:12:06.891906 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hw4gg" event={"ID":"9e6fac22-ee17-49ca-9561-c84535484f57","Type":"ContainerDied","Data":"95cad2a2d9c87dc6bb41fd491f8bfa6d9cc609fd42994e1efed96272c4fa1751"} Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.267376 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hw4gg" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.348818 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-combined-ca-bundle\") pod \"9e6fac22-ee17-49ca-9561-c84535484f57\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.348878 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-fernet-keys\") pod \"9e6fac22-ee17-49ca-9561-c84535484f57\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.348981 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq4cz\" (UniqueName: \"kubernetes.io/projected/9e6fac22-ee17-49ca-9561-c84535484f57-kube-api-access-lq4cz\") pod \"9e6fac22-ee17-49ca-9561-c84535484f57\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.349088 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-scripts\") pod \"9e6fac22-ee17-49ca-9561-c84535484f57\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.349125 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-config-data\") pod \"9e6fac22-ee17-49ca-9561-c84535484f57\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.349214 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-credential-keys\") pod \"9e6fac22-ee17-49ca-9561-c84535484f57\" (UID: \"9e6fac22-ee17-49ca-9561-c84535484f57\") " Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.354576 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6fac22-ee17-49ca-9561-c84535484f57-kube-api-access-lq4cz" (OuterVolumeSpecName: "kube-api-access-lq4cz") pod "9e6fac22-ee17-49ca-9561-c84535484f57" (UID: "9e6fac22-ee17-49ca-9561-c84535484f57"). InnerVolumeSpecName "kube-api-access-lq4cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.354662 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9e6fac22-ee17-49ca-9561-c84535484f57" (UID: "9e6fac22-ee17-49ca-9561-c84535484f57"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.355543 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-scripts" (OuterVolumeSpecName: "scripts") pod "9e6fac22-ee17-49ca-9561-c84535484f57" (UID: "9e6fac22-ee17-49ca-9561-c84535484f57"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.356262 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9e6fac22-ee17-49ca-9561-c84535484f57" (UID: "9e6fac22-ee17-49ca-9561-c84535484f57"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.378456 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-config-data" (OuterVolumeSpecName: "config-data") pod "9e6fac22-ee17-49ca-9561-c84535484f57" (UID: "9e6fac22-ee17-49ca-9561-c84535484f57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.390119 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e6fac22-ee17-49ca-9561-c84535484f57" (UID: "9e6fac22-ee17-49ca-9561-c84535484f57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.451683 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq4cz\" (UniqueName: \"kubernetes.io/projected/9e6fac22-ee17-49ca-9561-c84535484f57-kube-api-access-lq4cz\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.451750 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.451803 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.451825 4988 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.451843 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.451865 4988 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e6fac22-ee17-49ca-9561-c84535484f57-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.913888 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hw4gg" event={"ID":"9e6fac22-ee17-49ca-9561-c84535484f57","Type":"ContainerDied","Data":"d0869111ffea26182f4c62709ac93b8dbc92a3d5776faa4c416ca2af1ad9ecd4"} Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.914272 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0869111ffea26182f4c62709ac93b8dbc92a3d5776faa4c416ca2af1ad9ecd4" Nov 23 08:12:08 crc kubenswrapper[4988]: I1123 08:12:08.914015 4988 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hw4gg" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.026703 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hw4gg"] Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.031967 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hw4gg"] Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.113514 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nvkdt"] Nov 23 08:12:09 crc kubenswrapper[4988]: E1123 08:12:09.114103 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6fac22-ee17-49ca-9561-c84535484f57" containerName="keystone-bootstrap" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.114210 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6fac22-ee17-49ca-9561-c84535484f57" containerName="keystone-bootstrap" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.114557 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6fac22-ee17-49ca-9561-c84535484f57" containerName="keystone-bootstrap" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.115389 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.119673 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.119958 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.119964 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.120986 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.122049 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5hn2n" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.125992 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nvkdt"] Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.163709 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-config-data\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.163822 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-fernet-keys\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.163877 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-combined-ca-bundle\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 
08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.163929 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnpg4\" (UniqueName: \"kubernetes.io/projected/efe7b079-462b-435a-8066-4a37543eef94-kube-api-access-nnpg4\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.163951 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-credential-keys\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.163999 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-scripts\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.265145 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-scripts\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.265270 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-config-data\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.265333 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-fernet-keys\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.265366 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-combined-ca-bundle\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.265401 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnpg4\" (UniqueName: \"kubernetes.io/projected/efe7b079-462b-435a-8066-4a37543eef94-kube-api-access-nnpg4\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.265422 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-credential-keys\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.269566 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-scripts\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.270424 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-credential-keys\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.270803 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-fernet-keys\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.273781 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-config-data\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.282741 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-combined-ca-bundle\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.286802 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnpg4\" (UniqueName: \"kubernetes.io/projected/efe7b079-462b-435a-8066-4a37543eef94-kube-api-access-nnpg4\") pod \"keystone-bootstrap-nvkdt\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.438980 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.896458 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nvkdt"] Nov 23 08:12:09 crc kubenswrapper[4988]: I1123 08:12:09.928805 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nvkdt" event={"ID":"efe7b079-462b-435a-8066-4a37543eef94","Type":"ContainerStarted","Data":"9d4d4f02d5ec900b015515bf779495cc7afc66a5ea62eb1cd6022c711bc01df3"} Nov 23 08:12:10 crc kubenswrapper[4988]: I1123 08:12:10.504516 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e6fac22-ee17-49ca-9561-c84535484f57" path="/var/lib/kubelet/pods/9e6fac22-ee17-49ca-9561-c84535484f57/volumes" Nov 23 08:12:10 crc kubenswrapper[4988]: I1123 08:12:10.936065 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nvkdt" event={"ID":"efe7b079-462b-435a-8066-4a37543eef94","Type":"ContainerStarted","Data":"89aa7a47a5fc30e7d9488bc83e7168d50b29ec721bebb12457f6a3cee56081da"} Nov 23 08:12:10 crc kubenswrapper[4988]: I1123 08:12:10.965039 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nvkdt" podStartSLOduration=1.965013452 podStartE2EDuration="1.965013452s" podCreationTimestamp="2025-11-23 08:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:12:10.952469305 +0000 UTC m=+5183.260982078" watchObservedRunningTime="2025-11-23 08:12:10.965013452 +0000 UTC m=+5183.273526245" Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.411469 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.481289 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64cc488cc-88wjv"] Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.482231 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" podUID="4446799b-7495-4b41-bc36-585b07137bab" containerName="dnsmasq-dns" containerID="cri-o://bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc" gracePeriod=10 Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.948342 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.954469 4988 generic.go:334] "Generic (PLEG): container finished" podID="efe7b079-462b-435a-8066-4a37543eef94" containerID="89aa7a47a5fc30e7d9488bc83e7168d50b29ec721bebb12457f6a3cee56081da" exitCode=0 Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.954561 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nvkdt" event={"ID":"efe7b079-462b-435a-8066-4a37543eef94","Type":"ContainerDied","Data":"89aa7a47a5fc30e7d9488bc83e7168d50b29ec721bebb12457f6a3cee56081da"} Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.956462 4988 generic.go:334] "Generic (PLEG): container finished" podID="4446799b-7495-4b41-bc36-585b07137bab" containerID="bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc" exitCode=0 Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.956501 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" event={"ID":"4446799b-7495-4b41-bc36-585b07137bab","Type":"ContainerDied","Data":"bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc"} Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.956523 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" event={"ID":"4446799b-7495-4b41-bc36-585b07137bab","Type":"ContainerDied","Data":"ed2071fb819213e44e62c01299f6a11b50293da60351b6347c5c69917305b66e"} Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.956545 4988 scope.go:117] "RemoveContainer" containerID="bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc" Nov 23 08:12:12 crc kubenswrapper[4988]: I1123 08:12:12.956714 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64cc488cc-88wjv" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.009971 4988 scope.go:117] "RemoveContainer" containerID="b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.034268 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-sb\") pod \"4446799b-7495-4b41-bc36-585b07137bab\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.034350 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-dns-svc\") pod \"4446799b-7495-4b41-bc36-585b07137bab\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.034491 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-nb\") pod \"4446799b-7495-4b41-bc36-585b07137bab\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.034523 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klvnq\" (UniqueName: \"kubernetes.io/projected/4446799b-7495-4b41-bc36-585b07137bab-kube-api-access-klvnq\") pod \"4446799b-7495-4b41-bc36-585b07137bab\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.034549 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-config\") pod \"4446799b-7495-4b41-bc36-585b07137bab\" (UID: \"4446799b-7495-4b41-bc36-585b07137bab\") " Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.039670 4988 scope.go:117] "RemoveContainer" containerID="bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc" Nov 23 08:12:13 crc kubenswrapper[4988]: E1123 08:12:13.042862 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc\": container with ID starting with bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc not found: ID does not exist" containerID="bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.042932 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc"} err="failed to get container status \"bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc\": rpc error: code = NotFound desc = could not find container \"bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc\": container with ID starting with bb1df7f46591963e57bf3df53524e5dce1759fb4e3e5122a240f78518f8de6fc not found: ID does not exist" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.042987 4988 scope.go:117] "RemoveContainer" containerID="b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46" Nov 23 08:12:13 crc kubenswrapper[4988]: E1123 08:12:13.043621 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46\": container with ID starting with b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46 not found: ID does not exist" containerID="b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.043684 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46"} err="failed to get container status \"b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46\": rpc error: code = NotFound desc = could not find container \"b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46\": container with ID starting with b653c661757e5b81d5e749fb50296177d5e14736d2a8a60680f0822427129b46 not found: ID does not exist" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.044273 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4446799b-7495-4b41-bc36-585b07137bab-kube-api-access-klvnq" (OuterVolumeSpecName: "kube-api-access-klvnq") pod "4446799b-7495-4b41-bc36-585b07137bab" (UID: "4446799b-7495-4b41-bc36-585b07137bab"). InnerVolumeSpecName "kube-api-access-klvnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.087041 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-config" (OuterVolumeSpecName: "config") pod "4446799b-7495-4b41-bc36-585b07137bab" (UID: "4446799b-7495-4b41-bc36-585b07137bab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.097287 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4446799b-7495-4b41-bc36-585b07137bab" (UID: "4446799b-7495-4b41-bc36-585b07137bab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.106899 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4446799b-7495-4b41-bc36-585b07137bab" (UID: "4446799b-7495-4b41-bc36-585b07137bab"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.109032 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4446799b-7495-4b41-bc36-585b07137bab" (UID: "4446799b-7495-4b41-bc36-585b07137bab"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.136247 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.136290 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klvnq\" (UniqueName: \"kubernetes.io/projected/4446799b-7495-4b41-bc36-585b07137bab-kube-api-access-klvnq\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.136302 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.136312 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.136321 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4446799b-7495-4b41-bc36-585b07137bab-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.297254 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64cc488cc-88wjv"] Nov 23 08:12:13 crc kubenswrapper[4988]: I1123 08:12:13.303143 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64cc488cc-88wjv"] Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.407260 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.456746 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnpg4\" (UniqueName: \"kubernetes.io/projected/efe7b079-462b-435a-8066-4a37543eef94-kube-api-access-nnpg4\") pod \"efe7b079-462b-435a-8066-4a37543eef94\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.456851 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-fernet-keys\") pod \"efe7b079-462b-435a-8066-4a37543eef94\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.456941 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-combined-ca-bundle\") pod \"efe7b079-462b-435a-8066-4a37543eef94\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.456958 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-credential-keys\") pod \"efe7b079-462b-435a-8066-4a37543eef94\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.457003 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-scripts\") pod \"efe7b079-462b-435a-8066-4a37543eef94\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.457022 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-config-data\") pod \"efe7b079-462b-435a-8066-4a37543eef94\" (UID: \"efe7b079-462b-435a-8066-4a37543eef94\") " Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.465951 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "efe7b079-462b-435a-8066-4a37543eef94" (UID: "efe7b079-462b-435a-8066-4a37543eef94"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.465980 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "efe7b079-462b-435a-8066-4a37543eef94" (UID: "efe7b079-462b-435a-8066-4a37543eef94"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.467662 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe7b079-462b-435a-8066-4a37543eef94-kube-api-access-nnpg4" (OuterVolumeSpecName: "kube-api-access-nnpg4") pod "efe7b079-462b-435a-8066-4a37543eef94" (UID: "efe7b079-462b-435a-8066-4a37543eef94"). InnerVolumeSpecName "kube-api-access-nnpg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.477601 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-scripts" (OuterVolumeSpecName: "scripts") pod "efe7b079-462b-435a-8066-4a37543eef94" (UID: "efe7b079-462b-435a-8066-4a37543eef94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.482848 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efe7b079-462b-435a-8066-4a37543eef94" (UID: "efe7b079-462b-435a-8066-4a37543eef94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.489479 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-config-data" (OuterVolumeSpecName: "config-data") pod "efe7b079-462b-435a-8066-4a37543eef94" (UID: "efe7b079-462b-435a-8066-4a37543eef94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.497549 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:12:14 crc kubenswrapper[4988]: E1123 08:12:14.498117 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.509482 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4446799b-7495-4b41-bc36-585b07137bab" path="/var/lib/kubelet/pods/4446799b-7495-4b41-bc36-585b07137bab/volumes" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.558837 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.559011 4988 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.559069 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.559121 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.559172 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnpg4\" (UniqueName: 
\"kubernetes.io/projected/efe7b079-462b-435a-8066-4a37543eef94-kube-api-access-nnpg4\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.559240 4988 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/efe7b079-462b-435a-8066-4a37543eef94-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.984553 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nvkdt" event={"ID":"efe7b079-462b-435a-8066-4a37543eef94","Type":"ContainerDied","Data":"9d4d4f02d5ec900b015515bf779495cc7afc66a5ea62eb1cd6022c711bc01df3"} Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.984607 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d4d4f02d5ec900b015515bf779495cc7afc66a5ea62eb1cd6022c711bc01df3" Nov 23 08:12:14 crc kubenswrapper[4988]: I1123 08:12:14.984687 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nvkdt" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.093857 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-77c6bb66bc-7wqr8"] Nov 23 08:12:15 crc kubenswrapper[4988]: E1123 08:12:15.094597 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe7b079-462b-435a-8066-4a37543eef94" containerName="keystone-bootstrap" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.094615 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe7b079-462b-435a-8066-4a37543eef94" containerName="keystone-bootstrap" Nov 23 08:12:15 crc kubenswrapper[4988]: E1123 08:12:15.094640 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4446799b-7495-4b41-bc36-585b07137bab" containerName="dnsmasq-dns" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.094648 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4446799b-7495-4b41-bc36-585b07137bab" containerName="dnsmasq-dns" Nov 23 08:12:15 crc kubenswrapper[4988]: E1123 08:12:15.094657 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4446799b-7495-4b41-bc36-585b07137bab" containerName="init" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.094665 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4446799b-7495-4b41-bc36-585b07137bab" containerName="init" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.094868 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe7b079-462b-435a-8066-4a37543eef94" containerName="keystone-bootstrap" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.094886 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4446799b-7495-4b41-bc36-585b07137bab" containerName="dnsmasq-dns" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.095550 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.098039 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.098649 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.100280 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.100455 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.100504 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5hn2n" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.100645 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.120765 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77c6bb66bc-7wqr8"] Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.168799 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-internal-tls-certs\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.168982 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm68s\" (UniqueName: \"kubernetes.io/projected/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-kube-api-access-sm68s\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.169045 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-public-tls-certs\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.169066 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-credential-keys\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.169110 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-combined-ca-bundle\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.169212 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-scripts\") pod \"keystone-77c6bb66bc-7wqr8\" 
(UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.169358 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-config-data\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.169379 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-fernet-keys\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.271436 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-internal-tls-certs\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.271541 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm68s\" (UniqueName: \"kubernetes.io/projected/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-kube-api-access-sm68s\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.271593 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-public-tls-certs\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.271610 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-credential-keys\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.271638 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-combined-ca-bundle\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.271655 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-scripts\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.271695 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-config-data\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " 
pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.271709 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-fernet-keys\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.276822 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-credential-keys\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.278451 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-internal-tls-certs\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.281981 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-combined-ca-bundle\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.282298 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-public-tls-certs\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.282458 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-config-data\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.283787 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-fernet-keys\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.288013 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm68s\" (UniqueName: \"kubernetes.io/projected/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-kube-api-access-sm68s\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.293138 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f-scripts\") pod \"keystone-77c6bb66bc-7wqr8\" (UID: \"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f\") " pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.412886 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.897120 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77c6bb66bc-7wqr8"] Nov 23 08:12:15 crc kubenswrapper[4988]: I1123 08:12:15.999299 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77c6bb66bc-7wqr8" event={"ID":"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f","Type":"ContainerStarted","Data":"6a816db3ea83209e7cf699d22f7079d5b58273fec57eb3f806ce3f4bfba60811"} Nov 23 08:12:17 crc kubenswrapper[4988]: I1123 08:12:17.011009 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77c6bb66bc-7wqr8" event={"ID":"ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f","Type":"ContainerStarted","Data":"059fed08a0d540effdac79c4617c67022a5a110ebd4b9fd807098b99ee7e4240"} Nov 23 08:12:17 crc kubenswrapper[4988]: I1123 08:12:17.011491 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:17 crc kubenswrapper[4988]: I1123 08:12:17.031818 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-77c6bb66bc-7wqr8" podStartSLOduration=2.03180411 podStartE2EDuration="2.03180411s" podCreationTimestamp="2025-11-23 08:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:12:17.029503334 +0000 UTC m=+5189.338016187" watchObservedRunningTime="2025-11-23 08:12:17.03180411 +0000 UTC m=+5189.340316873" Nov 23 08:12:28 crc kubenswrapper[4988]: I1123 08:12:28.507734 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:12:28 crc kubenswrapper[4988]: E1123 08:12:28.509560 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:12:39 crc kubenswrapper[4988]: I1123 08:12:39.496243 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:12:39 crc kubenswrapper[4988]: E1123 08:12:39.497277 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:12:46 crc kubenswrapper[4988]: I1123 08:12:46.847597 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-77c6bb66bc-7wqr8" Nov 23 08:12:50 crc kubenswrapper[4988]: I1123 08:12:50.934575 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 08:12:50 crc kubenswrapper[4988]: I1123 08:12:50.936364 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 08:12:50 crc kubenswrapper[4988]: I1123 08:12:50.942230 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-smwrb" Nov 23 08:12:50 crc kubenswrapper[4988]: I1123 08:12:50.942528 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 23 08:12:50 crc kubenswrapper[4988]: I1123 08:12:50.943562 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 23 08:12:50 crc kubenswrapper[4988]: I1123 08:12:50.951477 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.049548 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config-secret\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.049690 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7fnv\" (UniqueName: \"kubernetes.io/projected/5adb072b-636b-4019-8dfe-44f8e8b27439-kube-api-access-f7fnv\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.049948 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.050007 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.152002 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7fnv\" (UniqueName: \"kubernetes.io/projected/5adb072b-636b-4019-8dfe-44f8e8b27439-kube-api-access-f7fnv\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.152097 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.152118 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.152150 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config-secret\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.152902 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.158145 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config-secret\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.160191 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.176177 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7fnv\" (UniqueName: \"kubernetes.io/projected/5adb072b-636b-4019-8dfe-44f8e8b27439-kube-api-access-f7fnv\") pod \"openstackclient\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.314061 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.497571 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:12:51 crc kubenswrapper[4988]: E1123 08:12:51.498641 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.778913 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 08:12:51 crc kubenswrapper[4988]: W1123 08:12:51.793816 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5adb072b_636b_4019_8dfe_44f8e8b27439.slice/crio-1ecc198d34ffc06276854bebfc67ea3e37bc6ea0da063b4f6d864cd87b651b4e WatchSource:0}: Error finding container 1ecc198d34ffc06276854bebfc67ea3e37bc6ea0da063b4f6d864cd87b651b4e: Status 404 returned error can't find the container with id 1ecc198d34ffc06276854bebfc67ea3e37bc6ea0da063b4f6d864cd87b651b4e Nov 23 08:12:51 crc kubenswrapper[4988]: I1123 08:12:51.797438 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:12:52 crc kubenswrapper[4988]: I1123 08:12:52.380564 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5adb072b-636b-4019-8dfe-44f8e8b27439","Type":"ContainerStarted","Data":"1ecc198d34ffc06276854bebfc67ea3e37bc6ea0da063b4f6d864cd87b651b4e"} Nov 23 08:13:02 crc kubenswrapper[4988]: I1123 08:13:02.496457 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:13:02 crc kubenswrapper[4988]: E1123 08:13:02.497314 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:13:03 crc kubenswrapper[4988]: I1123 08:13:03.480322 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5adb072b-636b-4019-8dfe-44f8e8b27439","Type":"ContainerStarted","Data":"76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045"} Nov 23 08:13:03 crc kubenswrapper[4988]: I1123 08:13:03.501260 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.8101346659999997 podStartE2EDuration="13.501244968s" podCreationTimestamp="2025-11-23 08:12:50 +0000 UTC" firstStartedPulling="2025-11-23 08:12:51.796796937 +0000 UTC m=+5224.105309700" lastFinishedPulling="2025-11-23 08:13:02.487907229 +0000 UTC m=+5234.796420002" observedRunningTime="2025-11-23 08:13:03.500362267 +0000 UTC m=+5235.808875030" watchObservedRunningTime="2025-11-23 08:13:03.501244968 +0000 UTC m=+5235.809757731" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.268671 4988 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5kfgz"] Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.270804 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.297325 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kfgz"] Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.453458 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqwqr\" (UniqueName: \"kubernetes.io/projected/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-kube-api-access-pqwqr\") pod \"redhat-operators-5kfgz\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.453803 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-utilities\") pod \"redhat-operators-5kfgz\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.453846 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-catalog-content\") pod \"redhat-operators-5kfgz\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.556475 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqwqr\" (UniqueName: \"kubernetes.io/projected/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-kube-api-access-pqwqr\") pod \"redhat-operators-5kfgz\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.556655 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-utilities\") pod \"redhat-operators-5kfgz\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.556689 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-catalog-content\") pod \"redhat-operators-5kfgz\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.557397 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-utilities\") pod \"redhat-operators-5kfgz\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.557477 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-catalog-content\") pod \"redhat-operators-5kfgz\" (UID: 
\"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.588236 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqwqr\" (UniqueName: \"kubernetes.io/projected/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-kube-api-access-pqwqr\") pod \"redhat-operators-5kfgz\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:13 crc kubenswrapper[4988]: I1123 08:13:13.600105 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:14 crc kubenswrapper[4988]: I1123 08:13:14.060836 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kfgz"] Nov 23 08:13:14 crc kubenswrapper[4988]: I1123 08:13:14.583145 4988 generic.go:334] "Generic (PLEG): container finished" podID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerID="90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751" exitCode=0 Nov 23 08:13:14 crc kubenswrapper[4988]: I1123 08:13:14.583259 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfgz" event={"ID":"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a","Type":"ContainerDied","Data":"90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751"} Nov 23 08:13:14 crc kubenswrapper[4988]: I1123 08:13:14.583301 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfgz" event={"ID":"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a","Type":"ContainerStarted","Data":"06805b215cb83f47867717718c685eb5fa3f0c9fb1528ea4e8724d414cb248e0"} Nov 23 08:13:15 crc kubenswrapper[4988]: I1123 08:13:15.596292 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfgz" event={"ID":"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a","Type":"ContainerStarted","Data":"ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6"} Nov 23 08:13:16 crc kubenswrapper[4988]: I1123 08:13:16.609839 4988 generic.go:334] "Generic (PLEG): container finished" podID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerID="ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6" exitCode=0 Nov 23 08:13:16 crc kubenswrapper[4988]: I1123 08:13:16.610806 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfgz" event={"ID":"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a","Type":"ContainerDied","Data":"ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6"} Nov 23 08:13:17 crc kubenswrapper[4988]: I1123 08:13:17.496439 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:13:17 crc kubenswrapper[4988]: E1123 08:13:17.496955 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:13:17 crc kubenswrapper[4988]: I1123 08:13:17.626136 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfgz" 
event={"ID":"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a","Type":"ContainerStarted","Data":"95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88"} Nov 23 08:13:17 crc kubenswrapper[4988]: I1123 08:13:17.650661 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5kfgz" podStartSLOduration=2.07060675 podStartE2EDuration="4.650641066s" podCreationTimestamp="2025-11-23 08:13:13 +0000 UTC" firstStartedPulling="2025-11-23 08:13:14.585929724 +0000 UTC m=+5246.894442517" lastFinishedPulling="2025-11-23 08:13:17.16596404 +0000 UTC m=+5249.474476833" observedRunningTime="2025-11-23 08:13:17.648500613 +0000 UTC m=+5249.957013386" watchObservedRunningTime="2025-11-23 08:13:17.650641066 +0000 UTC m=+5249.959153829" Nov 23 08:13:23 crc kubenswrapper[4988]: I1123 08:13:23.600600 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:23 crc kubenswrapper[4988]: I1123 08:13:23.601258 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:24 crc kubenswrapper[4988]: I1123 08:13:24.662555 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5kfgz" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerName="registry-server" probeResult="failure" output=< Nov 23 08:13:24 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:13:24 crc kubenswrapper[4988]: > Nov 23 08:13:31 crc kubenswrapper[4988]: I1123 08:13:31.496261 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:13:31 crc kubenswrapper[4988]: E1123 08:13:31.497821 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:13:33 crc kubenswrapper[4988]: I1123 08:13:33.680345 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:33 crc kubenswrapper[4988]: I1123 08:13:33.751635 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:33 crc kubenswrapper[4988]: I1123 08:13:33.914475 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5kfgz"] Nov 23 08:13:34 crc kubenswrapper[4988]: I1123 08:13:34.812275 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5kfgz" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerName="registry-server" containerID="cri-o://95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88" gracePeriod=2 Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.310982 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.485057 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-catalog-content\") pod \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.485228 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-utilities\") pod \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.485401 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqwqr\" (UniqueName: \"kubernetes.io/projected/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-kube-api-access-pqwqr\") pod \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\" (UID: \"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a\") " Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.486667 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-utilities" (OuterVolumeSpecName: "utilities") pod "6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" (UID: "6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.487523 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.493700 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-kube-api-access-pqwqr" (OuterVolumeSpecName: "kube-api-access-pqwqr") pod "6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" (UID: "6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a"). InnerVolumeSpecName "kube-api-access-pqwqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.589563 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqwqr\" (UniqueName: \"kubernetes.io/projected/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-kube-api-access-pqwqr\") on node \"crc\" DevicePath \"\"" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.624496 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" (UID: "6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.691268 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.825463 4988 generic.go:334] "Generic (PLEG): container finished" podID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerID="95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88" exitCode=0 Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.825508 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfgz" event={"ID":"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a","Type":"ContainerDied","Data":"95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88"} Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.825555 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kfgz" event={"ID":"6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a","Type":"ContainerDied","Data":"06805b215cb83f47867717718c685eb5fa3f0c9fb1528ea4e8724d414cb248e0"} Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.825575 4988 scope.go:117] "RemoveContainer" containerID="95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.825581 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kfgz" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.847874 4988 scope.go:117] "RemoveContainer" containerID="ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.878316 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5kfgz"] Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.882702 4988 scope.go:117] "RemoveContainer" containerID="90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.885777 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5kfgz"] Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.934458 4988 scope.go:117] "RemoveContainer" containerID="95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88" Nov 23 08:13:35 crc kubenswrapper[4988]: E1123 08:13:35.935020 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88\": container with ID starting with 95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88 not found: ID does not exist" containerID="95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.935093 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88"} err="failed to get container status \"95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88\": rpc error: code = NotFound desc = could not find container \"95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88\": container with ID starting with 95b18708ae4da5d7ca9258c83946b54d4ff4f602e6bf86afa302c7e1eb50bb88 not found: ID does not exist" Nov 23 08:13:35 crc 
kubenswrapper[4988]: I1123 08:13:35.935136 4988 scope.go:117] "RemoveContainer" containerID="ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6" Nov 23 08:13:35 crc kubenswrapper[4988]: E1123 08:13:35.935717 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6\": container with ID starting with ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6 not found: ID does not exist" containerID="ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.935778 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6"} err="failed to get container status \"ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6\": rpc error: code = NotFound desc = could not find container \"ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6\": container with ID starting with ed5f3bcdf14aec586b32fdcfcb57a314850d9df230c317ef4abdc81d84302ab6 not found: ID does not exist" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.935805 4988 scope.go:117] "RemoveContainer" containerID="90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751" Nov 23 08:13:35 crc kubenswrapper[4988]: E1123 08:13:35.936306 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751\": container with ID starting with 90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751 not found: ID does not exist" containerID="90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751" Nov 23 08:13:35 crc kubenswrapper[4988]: I1123 08:13:35.936373 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751"} err="failed to get container status \"90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751\": rpc error: code = NotFound desc = could not find container \"90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751\": container with ID starting with 90bc2cae177252278efa72b844828309f058d5709454300ecbe888bcb1217751 not found: ID does not exist" Nov 23 08:13:36 crc kubenswrapper[4988]: I1123 08:13:36.513174 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" path="/var/lib/kubelet/pods/6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a/volumes" Nov 23 08:13:43 crc kubenswrapper[4988]: I1123 08:13:43.496077 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:13:43 crc kubenswrapper[4988]: E1123 08:13:43.497052 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:13:54 crc kubenswrapper[4988]: E1123 08:13:54.441822 4988 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.176:60206->38.102.83.176:33549: write tcp 38.102.83.176:60206->38.102.83.176:33549: write: broken pipe Nov 23 08:13:56 crc kubenswrapper[4988]: I1123 08:13:56.497168 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:13:56 crc kubenswrapper[4988]: E1123 08:13:56.498314 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:13:57 crc kubenswrapper[4988]: I1123 08:13:57.275001 4988 scope.go:117] "RemoveContainer" containerID="180ab18d48706d8a98483948eb2a4f8d4b2aadae6e02ccca9035ccc305f3481e" Nov 23 08:13:57 crc kubenswrapper[4988]: I1123 08:13:57.305804 4988 scope.go:117] "RemoveContainer" containerID="570cd11c3a23b15d7b1299241ec08a9f373d2a742aea4cfa6cd0eaf80af4d7c1" Nov 23 08:13:57 crc kubenswrapper[4988]: I1123 08:13:57.369889 4988 scope.go:117] "RemoveContainer" containerID="12d2c270fc23f1118064d42e3340d35af60c32015ff570069c28e4f144064e2d" Nov 23 08:13:57 crc kubenswrapper[4988]: I1123 08:13:57.403449 4988 scope.go:117] "RemoveContainer" containerID="84c5174bc13df20e41270aee78cd4a6aa78b43d74c537993af5092a71531ec6c" Nov 23 08:13:57 crc kubenswrapper[4988]: I1123 08:13:57.440807 4988 scope.go:117] "RemoveContainer" containerID="efd01d02796aa6bcad44eb1189a5e7a4ab9fc4ca8a982e91a1d3af4e57dfcbe9" Nov 23 08:13:57 crc kubenswrapper[4988]: I1123 08:13:57.497982 4988 scope.go:117] "RemoveContainer" containerID="871e516b6061f0f5538c5b00d2ac2ec7526778b66dde626019b603fd80eb913b" Nov 23 08:13:57 crc kubenswrapper[4988]: I1123 08:13:57.518641 4988 scope.go:117] "RemoveContainer" containerID="5cb142c2b497cdc6160087d126183f8f74a958aa20d65e0fb113aa74764ef61c" Nov 23 08:13:57 crc kubenswrapper[4988]: I1123 08:13:57.537836 4988 scope.go:117] "RemoveContainer" containerID="7af7bfe1ec3a8732e1d2fa9df86088c121cf21444f2c5510c202802abd07315b" Nov 23 08:14:07 crc kubenswrapper[4988]: I1123 08:14:07.496696 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:14:07 crc kubenswrapper[4988]: E1123 08:14:07.497406 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.793894 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-spdrt"] Nov 23 08:14:12 crc kubenswrapper[4988]: E1123 08:14:12.794827 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerName="extract-utilities" Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.794850 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerName="extract-utilities" Nov 23 08:14:12 crc kubenswrapper[4988]: E1123 08:14:12.794886 4988 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerName="registry-server" Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.794900 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerName="registry-server" Nov 23 08:14:12 crc kubenswrapper[4988]: E1123 08:14:12.794934 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerName="extract-content" Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.794948 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerName="extract-content" Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.795304 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd732ba-eb67-4ad3-bf1b-3f273b5ddb9a" containerName="registry-server" Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.797631 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.803829 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spdrt"] Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.985954 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lw2w\" (UniqueName: \"kubernetes.io/projected/368f549f-264d-47c2-beab-89143127ed57-kube-api-access-8lw2w\") pod \"redhat-marketplace-spdrt\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.986188 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-utilities\") pod \"redhat-marketplace-spdrt\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:12 crc kubenswrapper[4988]: I1123 08:14:12.986390 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-catalog-content\") pod \"redhat-marketplace-spdrt\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:13 crc kubenswrapper[4988]: I1123 08:14:13.088343 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lw2w\" (UniqueName: \"kubernetes.io/projected/368f549f-264d-47c2-beab-89143127ed57-kube-api-access-8lw2w\") pod \"redhat-marketplace-spdrt\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:13 crc kubenswrapper[4988]: I1123 08:14:13.088427 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-utilities\") pod \"redhat-marketplace-spdrt\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:13 crc kubenswrapper[4988]: I1123 08:14:13.088468 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-catalog-content\") pod \"redhat-marketplace-spdrt\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:13 crc kubenswrapper[4988]: I1123 08:14:13.089043 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-utilities\") pod \"redhat-marketplace-spdrt\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:13 crc kubenswrapper[4988]: I1123 08:14:13.089182 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-catalog-content\") pod \"redhat-marketplace-spdrt\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:13 crc kubenswrapper[4988]: I1123 08:14:13.120663 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lw2w\" (UniqueName: \"kubernetes.io/projected/368f549f-264d-47c2-beab-89143127ed57-kube-api-access-8lw2w\") pod \"redhat-marketplace-spdrt\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:13 crc kubenswrapper[4988]: I1123 08:14:13.141482 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:13 crc kubenswrapper[4988]: I1123 08:14:13.635628 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-spdrt"] Nov 23 08:14:14 crc kubenswrapper[4988]: I1123 08:14:14.228419 4988 generic.go:334] "Generic (PLEG): container finished" podID="368f549f-264d-47c2-beab-89143127ed57" containerID="5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a" exitCode=0 Nov 23 08:14:14 crc kubenswrapper[4988]: I1123 08:14:14.228491 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spdrt" event={"ID":"368f549f-264d-47c2-beab-89143127ed57","Type":"ContainerDied","Data":"5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a"} Nov 23 08:14:14 crc kubenswrapper[4988]: I1123 08:14:14.228543 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spdrt" event={"ID":"368f549f-264d-47c2-beab-89143127ed57","Type":"ContainerStarted","Data":"e3d01305e7fd40eee32efe52d0dfbe7b91b4a2e83e74792fa1621c19cbf68316"} Nov 23 08:14:14 crc kubenswrapper[4988]: I1123 08:14:14.969689 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-57dpb"] Nov 23 08:14:14 crc kubenswrapper[4988]: I1123 08:14:14.972105 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:14 crc kubenswrapper[4988]: I1123 08:14:14.981290 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57dpb"] Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.128369 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-catalog-content\") pod \"certified-operators-57dpb\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.128477 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-utilities\") pod \"certified-operators-57dpb\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.128871 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jnc4\" (UniqueName: \"kubernetes.io/projected/0fc9512b-7113-46f2-be7a-a68a09def2bc-kube-api-access-5jnc4\") pod \"certified-operators-57dpb\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.230176 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-catalog-content\") pod \"certified-operators-57dpb\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.230301 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-utilities\") pod \"certified-operators-57dpb\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.230415 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jnc4\" (UniqueName: \"kubernetes.io/projected/0fc9512b-7113-46f2-be7a-a68a09def2bc-kube-api-access-5jnc4\") pod \"certified-operators-57dpb\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.231329 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-catalog-content\") pod \"certified-operators-57dpb\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.231614 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-utilities\") pod \"certified-operators-57dpb\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.240299 4988 generic.go:334] "Generic 
(PLEG): container finished" podID="368f549f-264d-47c2-beab-89143127ed57" containerID="58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e" exitCode=0 Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.240345 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spdrt" event={"ID":"368f549f-264d-47c2-beab-89143127ed57","Type":"ContainerDied","Data":"58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e"} Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.269009 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jnc4\" (UniqueName: \"kubernetes.io/projected/0fc9512b-7113-46f2-be7a-a68a09def2bc-kube-api-access-5jnc4\") pod \"certified-operators-57dpb\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.289987 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:15 crc kubenswrapper[4988]: I1123 08:14:15.763415 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57dpb"] Nov 23 08:14:15 crc kubenswrapper[4988]: W1123 08:14:15.771975 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fc9512b_7113_46f2_be7a_a68a09def2bc.slice/crio-96cf0917e260d3d3fb8d2d76545108568b87ba7479e96f0fc297ad8b06bb955b WatchSource:0}: Error finding container 96cf0917e260d3d3fb8d2d76545108568b87ba7479e96f0fc297ad8b06bb955b: Status 404 returned error can't find the container with id 96cf0917e260d3d3fb8d2d76545108568b87ba7479e96f0fc297ad8b06bb955b Nov 23 08:14:16 crc kubenswrapper[4988]: I1123 08:14:16.251810 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spdrt" event={"ID":"368f549f-264d-47c2-beab-89143127ed57","Type":"ContainerStarted","Data":"162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c"} Nov 23 08:14:16 crc kubenswrapper[4988]: I1123 08:14:16.253848 4988 generic.go:334] "Generic (PLEG): container finished" podID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerID="b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338" exitCode=0 Nov 23 08:14:16 crc kubenswrapper[4988]: I1123 08:14:16.253913 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57dpb" event={"ID":"0fc9512b-7113-46f2-be7a-a68a09def2bc","Type":"ContainerDied","Data":"b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338"} Nov 23 08:14:16 crc kubenswrapper[4988]: I1123 08:14:16.253952 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57dpb" event={"ID":"0fc9512b-7113-46f2-be7a-a68a09def2bc","Type":"ContainerStarted","Data":"96cf0917e260d3d3fb8d2d76545108568b87ba7479e96f0fc297ad8b06bb955b"} Nov 23 08:14:16 crc kubenswrapper[4988]: I1123 08:14:16.290587 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-spdrt" podStartSLOduration=2.846007562 podStartE2EDuration="4.290562787s" podCreationTimestamp="2025-11-23 08:14:12 +0000 UTC" firstStartedPulling="2025-11-23 08:14:14.230670225 +0000 UTC m=+5306.539182988" lastFinishedPulling="2025-11-23 08:14:15.67522545 +0000 UTC m=+5307.983738213" observedRunningTime="2025-11-23 08:14:16.278531272 +0000 UTC 
m=+5308.587044055" watchObservedRunningTime="2025-11-23 08:14:16.290562787 +0000 UTC m=+5308.599075550" Nov 23 08:14:17 crc kubenswrapper[4988]: I1123 08:14:17.270172 4988 generic.go:334] "Generic (PLEG): container finished" podID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerID="4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb" exitCode=0 Nov 23 08:14:17 crc kubenswrapper[4988]: I1123 08:14:17.270309 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57dpb" event={"ID":"0fc9512b-7113-46f2-be7a-a68a09def2bc","Type":"ContainerDied","Data":"4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb"} Nov 23 08:14:18 crc kubenswrapper[4988]: I1123 08:14:18.283578 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57dpb" event={"ID":"0fc9512b-7113-46f2-be7a-a68a09def2bc","Type":"ContainerStarted","Data":"b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce"} Nov 23 08:14:18 crc kubenswrapper[4988]: I1123 08:14:18.310550 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-57dpb" podStartSLOduration=2.876882442 podStartE2EDuration="4.310523489s" podCreationTimestamp="2025-11-23 08:14:14 +0000 UTC" firstStartedPulling="2025-11-23 08:14:16.255314243 +0000 UTC m=+5308.563827046" lastFinishedPulling="2025-11-23 08:14:17.68895532 +0000 UTC m=+5309.997468093" observedRunningTime="2025-11-23 08:14:18.304922952 +0000 UTC m=+5310.613435755" watchObservedRunningTime="2025-11-23 08:14:18.310523489 +0000 UTC m=+5310.619036272" Nov 23 08:14:22 crc kubenswrapper[4988]: I1123 08:14:22.496398 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:14:22 crc kubenswrapper[4988]: E1123 08:14:22.496923 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:14:23 crc kubenswrapper[4988]: I1123 08:14:23.142496 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:23 crc kubenswrapper[4988]: I1123 08:14:23.142546 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:23 crc kubenswrapper[4988]: I1123 08:14:23.207677 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:23 crc kubenswrapper[4988]: I1123 08:14:23.452709 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:23 crc kubenswrapper[4988]: I1123 08:14:23.506361 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spdrt"] Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.526585 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-n8phq"] Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.527543 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.534117 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n8phq"] Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.636559 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7bfb-account-create-8plcz"] Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.637749 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.642910 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.650139 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7bfb-account-create-8plcz"] Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.711808 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz6t5\" (UniqueName: \"kubernetes.io/projected/9496c06f-ea1b-4017-8b27-3d9bec1df46d-kube-api-access-hz6t5\") pod \"barbican-db-create-n8phq\" (UID: \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\") " pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.711873 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe853168-6e13-40b0-9a8a-89399141ab0e-operator-scripts\") pod \"barbican-7bfb-account-create-8plcz\" (UID: \"fe853168-6e13-40b0-9a8a-89399141ab0e\") " pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.711914 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9496c06f-ea1b-4017-8b27-3d9bec1df46d-operator-scripts\") pod \"barbican-db-create-n8phq\" (UID: \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\") " pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.711935 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29f2q\" (UniqueName: \"kubernetes.io/projected/fe853168-6e13-40b0-9a8a-89399141ab0e-kube-api-access-29f2q\") pod \"barbican-7bfb-account-create-8plcz\" (UID: \"fe853168-6e13-40b0-9a8a-89399141ab0e\") " pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.813755 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz6t5\" (UniqueName: \"kubernetes.io/projected/9496c06f-ea1b-4017-8b27-3d9bec1df46d-kube-api-access-hz6t5\") pod \"barbican-db-create-n8phq\" (UID: \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\") " pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.813836 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe853168-6e13-40b0-9a8a-89399141ab0e-operator-scripts\") pod \"barbican-7bfb-account-create-8plcz\" (UID: \"fe853168-6e13-40b0-9a8a-89399141ab0e\") " pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.813873 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9496c06f-ea1b-4017-8b27-3d9bec1df46d-operator-scripts\") pod \"barbican-db-create-n8phq\" (UID: \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\") " pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.813898 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29f2q\" (UniqueName: \"kubernetes.io/projected/fe853168-6e13-40b0-9a8a-89399141ab0e-kube-api-access-29f2q\") pod \"barbican-7bfb-account-create-8plcz\" (UID: \"fe853168-6e13-40b0-9a8a-89399141ab0e\") " pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.814670 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9496c06f-ea1b-4017-8b27-3d9bec1df46d-operator-scripts\") pod \"barbican-db-create-n8phq\" (UID: \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\") " pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.814938 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe853168-6e13-40b0-9a8a-89399141ab0e-operator-scripts\") pod \"barbican-7bfb-account-create-8plcz\" (UID: \"fe853168-6e13-40b0-9a8a-89399141ab0e\") " pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.835101 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29f2q\" (UniqueName: \"kubernetes.io/projected/fe853168-6e13-40b0-9a8a-89399141ab0e-kube-api-access-29f2q\") pod \"barbican-7bfb-account-create-8plcz\" (UID: \"fe853168-6e13-40b0-9a8a-89399141ab0e\") " pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.844301 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz6t5\" (UniqueName: \"kubernetes.io/projected/9496c06f-ea1b-4017-8b27-3d9bec1df46d-kube-api-access-hz6t5\") pod \"barbican-db-create-n8phq\" (UID: \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\") " pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.893841 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:24 crc kubenswrapper[4988]: I1123 08:14:24.958976 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.290742 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.291016 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.368869 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.369536 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-spdrt" podUID="368f549f-264d-47c2-beab-89143127ed57" containerName="registry-server" containerID="cri-o://162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c" gracePeriod=2 Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.369980 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n8phq"] Nov 23 08:14:25 crc kubenswrapper[4988]: W1123 08:14:25.387427 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9496c06f_ea1b_4017_8b27_3d9bec1df46d.slice/crio-da753116f11589042877bd2769e19aa5e579533648c233895d8f5c0a5401bc01 WatchSource:0}: Error finding container da753116f11589042877bd2769e19aa5e579533648c233895d8f5c0a5401bc01: Status 404 returned error can't find the container with id da753116f11589042877bd2769e19aa5e579533648c233895d8f5c0a5401bc01 Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.432455 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.452986 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7bfb-account-create-8plcz"] Nov 23 08:14:25 crc kubenswrapper[4988]: W1123 08:14:25.497620 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe853168_6e13_40b0_9a8a_89399141ab0e.slice/crio-c83524310845edb7ac67860e8378f21c449e85b5f9725f9726fdb19664ed0b5e WatchSource:0}: Error finding container c83524310845edb7ac67860e8378f21c449e85b5f9725f9726fdb19664ed0b5e: Status 404 returned error can't find the container with id c83524310845edb7ac67860e8378f21c449e85b5f9725f9726fdb19664ed0b5e Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.745128 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.849483 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57dpb"] Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.931919 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lw2w\" (UniqueName: \"kubernetes.io/projected/368f549f-264d-47c2-beab-89143127ed57-kube-api-access-8lw2w\") pod \"368f549f-264d-47c2-beab-89143127ed57\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.932094 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-catalog-content\") pod \"368f549f-264d-47c2-beab-89143127ed57\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.932268 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-utilities\") pod \"368f549f-264d-47c2-beab-89143127ed57\" (UID: \"368f549f-264d-47c2-beab-89143127ed57\") " Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.935130 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-utilities" (OuterVolumeSpecName: "utilities") pod "368f549f-264d-47c2-beab-89143127ed57" (UID: "368f549f-264d-47c2-beab-89143127ed57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.941272 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368f549f-264d-47c2-beab-89143127ed57-kube-api-access-8lw2w" (OuterVolumeSpecName: "kube-api-access-8lw2w") pod "368f549f-264d-47c2-beab-89143127ed57" (UID: "368f549f-264d-47c2-beab-89143127ed57"). InnerVolumeSpecName "kube-api-access-8lw2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:14:25 crc kubenswrapper[4988]: I1123 08:14:25.951750 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "368f549f-264d-47c2-beab-89143127ed57" (UID: "368f549f-264d-47c2-beab-89143127ed57"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.034708 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lw2w\" (UniqueName: \"kubernetes.io/projected/368f549f-264d-47c2-beab-89143127ed57-kube-api-access-8lw2w\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.034762 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.034781 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/368f549f-264d-47c2-beab-89143127ed57-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.384101 4988 generic.go:334] "Generic (PLEG): container finished" podID="368f549f-264d-47c2-beab-89143127ed57" containerID="162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c" exitCode=0 Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.384286 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-spdrt" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.384276 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spdrt" event={"ID":"368f549f-264d-47c2-beab-89143127ed57","Type":"ContainerDied","Data":"162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c"} Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.384505 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-spdrt" event={"ID":"368f549f-264d-47c2-beab-89143127ed57","Type":"ContainerDied","Data":"e3d01305e7fd40eee32efe52d0dfbe7b91b4a2e83e74792fa1621c19cbf68316"} Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.384566 4988 scope.go:117] "RemoveContainer" containerID="162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.386925 4988 generic.go:334] "Generic (PLEG): container finished" podID="fe853168-6e13-40b0-9a8a-89399141ab0e" containerID="bc5fe75236f0abe1781cfe573d40b440305386e23c2de47904233c346adb618a" exitCode=0 Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.387030 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7bfb-account-create-8plcz" event={"ID":"fe853168-6e13-40b0-9a8a-89399141ab0e","Type":"ContainerDied","Data":"bc5fe75236f0abe1781cfe573d40b440305386e23c2de47904233c346adb618a"} Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.387075 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7bfb-account-create-8plcz" event={"ID":"fe853168-6e13-40b0-9a8a-89399141ab0e","Type":"ContainerStarted","Data":"c83524310845edb7ac67860e8378f21c449e85b5f9725f9726fdb19664ed0b5e"} Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.389434 4988 generic.go:334] "Generic (PLEG): container finished" podID="9496c06f-ea1b-4017-8b27-3d9bec1df46d" containerID="91588a3597f6fb9f0f977e21834218a47a86aac3060d366815a90962157174d7" exitCode=0 Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.390572 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n8phq" 
event={"ID":"9496c06f-ea1b-4017-8b27-3d9bec1df46d","Type":"ContainerDied","Data":"91588a3597f6fb9f0f977e21834218a47a86aac3060d366815a90962157174d7"} Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.390633 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n8phq" event={"ID":"9496c06f-ea1b-4017-8b27-3d9bec1df46d","Type":"ContainerStarted","Data":"da753116f11589042877bd2769e19aa5e579533648c233895d8f5c0a5401bc01"} Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.438683 4988 scope.go:117] "RemoveContainer" containerID="58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.475328 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-spdrt"] Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.482283 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-spdrt"] Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.482450 4988 scope.go:117] "RemoveContainer" containerID="5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.510215 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="368f549f-264d-47c2-beab-89143127ed57" path="/var/lib/kubelet/pods/368f549f-264d-47c2-beab-89143127ed57/volumes" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.543881 4988 scope.go:117] "RemoveContainer" containerID="162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c" Nov 23 08:14:26 crc kubenswrapper[4988]: E1123 08:14:26.544442 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c\": container with ID starting with 162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c not found: ID does not exist" containerID="162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.544526 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c"} err="failed to get container status \"162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c\": rpc error: code = NotFound desc = could not find container \"162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c\": container with ID starting with 162ed5d1a82e44d54ddeadf2eb1bb04e507e9e07a9e1c36698b3b4e63b2e5f7c not found: ID does not exist" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.544572 4988 scope.go:117] "RemoveContainer" containerID="58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e" Nov 23 08:14:26 crc kubenswrapper[4988]: E1123 08:14:26.545641 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e\": container with ID starting with 58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e not found: ID does not exist" containerID="58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.545713 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e"} err="failed to get container 
status \"58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e\": rpc error: code = NotFound desc = could not find container \"58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e\": container with ID starting with 58826515789ac1bae4ff1d90ecda7a5a98cd9b4ecb2d2b7339390bc1ef796a3e not found: ID does not exist" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.545757 4988 scope.go:117] "RemoveContainer" containerID="5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a" Nov 23 08:14:26 crc kubenswrapper[4988]: E1123 08:14:26.546180 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a\": container with ID starting with 5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a not found: ID does not exist" containerID="5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a" Nov 23 08:14:26 crc kubenswrapper[4988]: I1123 08:14:26.546319 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a"} err="failed to get container status \"5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a\": rpc error: code = NotFound desc = could not find container \"5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a\": container with ID starting with 5563cba342423b7c4fdfeed8d073a602a384a6f7033fffe10eb536640896474a not found: ID does not exist" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.402292 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-57dpb" podUID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerName="registry-server" containerID="cri-o://b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce" gracePeriod=2 Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.768354 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.773171 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.867504 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz6t5\" (UniqueName: \"kubernetes.io/projected/9496c06f-ea1b-4017-8b27-3d9bec1df46d-kube-api-access-hz6t5\") pod \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\" (UID: \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\") " Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.867748 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9496c06f-ea1b-4017-8b27-3d9bec1df46d-operator-scripts\") pod \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\" (UID: \"9496c06f-ea1b-4017-8b27-3d9bec1df46d\") " Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.867861 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe853168-6e13-40b0-9a8a-89399141ab0e-operator-scripts\") pod \"fe853168-6e13-40b0-9a8a-89399141ab0e\" (UID: \"fe853168-6e13-40b0-9a8a-89399141ab0e\") " Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.867990 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29f2q\" (UniqueName: \"kubernetes.io/projected/fe853168-6e13-40b0-9a8a-89399141ab0e-kube-api-access-29f2q\") pod \"fe853168-6e13-40b0-9a8a-89399141ab0e\" (UID: \"fe853168-6e13-40b0-9a8a-89399141ab0e\") " Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.868723 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe853168-6e13-40b0-9a8a-89399141ab0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe853168-6e13-40b0-9a8a-89399141ab0e" (UID: "fe853168-6e13-40b0-9a8a-89399141ab0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.868762 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9496c06f-ea1b-4017-8b27-3d9bec1df46d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9496c06f-ea1b-4017-8b27-3d9bec1df46d" (UID: "9496c06f-ea1b-4017-8b27-3d9bec1df46d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.868981 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9496c06f-ea1b-4017-8b27-3d9bec1df46d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.869087 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe853168-6e13-40b0-9a8a-89399141ab0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.871888 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe853168-6e13-40b0-9a8a-89399141ab0e-kube-api-access-29f2q" (OuterVolumeSpecName: "kube-api-access-29f2q") pod "fe853168-6e13-40b0-9a8a-89399141ab0e" (UID: "fe853168-6e13-40b0-9a8a-89399141ab0e"). InnerVolumeSpecName "kube-api-access-29f2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.872025 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9496c06f-ea1b-4017-8b27-3d9bec1df46d-kube-api-access-hz6t5" (OuterVolumeSpecName: "kube-api-access-hz6t5") pod "9496c06f-ea1b-4017-8b27-3d9bec1df46d" (UID: "9496c06f-ea1b-4017-8b27-3d9bec1df46d"). InnerVolumeSpecName "kube-api-access-hz6t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.970714 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29f2q\" (UniqueName: \"kubernetes.io/projected/fe853168-6e13-40b0-9a8a-89399141ab0e-kube-api-access-29f2q\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:27 crc kubenswrapper[4988]: I1123 08:14:27.970770 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz6t5\" (UniqueName: \"kubernetes.io/projected/9496c06f-ea1b-4017-8b27-3d9bec1df46d-kube-api-access-hz6t5\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.404977 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.412774 4988 generic.go:334] "Generic (PLEG): container finished" podID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerID="b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce" exitCode=0 Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.412848 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57dpb" event={"ID":"0fc9512b-7113-46f2-be7a-a68a09def2bc","Type":"ContainerDied","Data":"b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce"} Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.412862 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57dpb" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.412884 4988 scope.go:117] "RemoveContainer" containerID="b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.412871 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57dpb" event={"ID":"0fc9512b-7113-46f2-be7a-a68a09def2bc","Type":"ContainerDied","Data":"96cf0917e260d3d3fb8d2d76545108568b87ba7479e96f0fc297ad8b06bb955b"} Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.414798 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7bfb-account-create-8plcz" event={"ID":"fe853168-6e13-40b0-9a8a-89399141ab0e","Type":"ContainerDied","Data":"c83524310845edb7ac67860e8378f21c449e85b5f9725f9726fdb19664ed0b5e"} Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.414828 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c83524310845edb7ac67860e8378f21c449e85b5f9725f9726fdb19664ed0b5e" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.414828 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7bfb-account-create-8plcz" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.420976 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n8phq" event={"ID":"9496c06f-ea1b-4017-8b27-3d9bec1df46d","Type":"ContainerDied","Data":"da753116f11589042877bd2769e19aa5e579533648c233895d8f5c0a5401bc01"} Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.421016 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da753116f11589042877bd2769e19aa5e579533648c233895d8f5c0a5401bc01" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.421032 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n8phq" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.460431 4988 scope.go:117] "RemoveContainer" containerID="4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.485989 4988 scope.go:117] "RemoveContainer" containerID="b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.501743 4988 scope.go:117] "RemoveContainer" containerID="b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce" Nov 23 08:14:28 crc kubenswrapper[4988]: E1123 08:14:28.502159 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce\": container with ID starting with b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce not found: ID does not exist" containerID="b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.502234 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce"} err="failed to get container status \"b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce\": rpc error: code = NotFound desc = could not find container \"b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce\": container with ID starting with b6c9e5c4a0a3bd9ab090f81f3f1a76d76802aa6c248b7626121a43879e5aa8ce not found: ID does not exist" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.502266 4988 scope.go:117] "RemoveContainer" containerID="4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb" Nov 23 08:14:28 crc kubenswrapper[4988]: E1123 08:14:28.502615 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb\": container with ID starting with 4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb not found: ID does not exist" containerID="4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.502692 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb"} err="failed to get container status \"4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb\": rpc error: code = NotFound desc = could not find container \"4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb\": container with ID starting with 
4192efa65d56849ffd3d38333e3e0b35c64bc7365565dc6e8278e21ebfb195cb not found: ID does not exist" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.502747 4988 scope.go:117] "RemoveContainer" containerID="b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338" Nov 23 08:14:28 crc kubenswrapper[4988]: E1123 08:14:28.503256 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338\": container with ID starting with b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338 not found: ID does not exist" containerID="b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.503301 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338"} err="failed to get container status \"b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338\": rpc error: code = NotFound desc = could not find container \"b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338\": container with ID starting with b1d2ffef5c07acd2bf4a1ee4eee617a701a98ed24da3e9e7c9532b19bd0cd338 not found: ID does not exist" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.581663 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-utilities\") pod \"0fc9512b-7113-46f2-be7a-a68a09def2bc\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.581912 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jnc4\" (UniqueName: \"kubernetes.io/projected/0fc9512b-7113-46f2-be7a-a68a09def2bc-kube-api-access-5jnc4\") pod \"0fc9512b-7113-46f2-be7a-a68a09def2bc\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.581976 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-catalog-content\") pod \"0fc9512b-7113-46f2-be7a-a68a09def2bc\" (UID: \"0fc9512b-7113-46f2-be7a-a68a09def2bc\") " Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.582619 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-utilities" (OuterVolumeSpecName: "utilities") pod "0fc9512b-7113-46f2-be7a-a68a09def2bc" (UID: "0fc9512b-7113-46f2-be7a-a68a09def2bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.589088 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc9512b-7113-46f2-be7a-a68a09def2bc-kube-api-access-5jnc4" (OuterVolumeSpecName: "kube-api-access-5jnc4") pod "0fc9512b-7113-46f2-be7a-a68a09def2bc" (UID: "0fc9512b-7113-46f2-be7a-a68a09def2bc"). InnerVolumeSpecName "kube-api-access-5jnc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.662936 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fc9512b-7113-46f2-be7a-a68a09def2bc" (UID: "0fc9512b-7113-46f2-be7a-a68a09def2bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.683998 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jnc4\" (UniqueName: \"kubernetes.io/projected/0fc9512b-7113-46f2-be7a-a68a09def2bc-kube-api-access-5jnc4\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.684045 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.684060 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc9512b-7113-46f2-be7a-a68a09def2bc-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.774014 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57dpb"] Nov 23 08:14:28 crc kubenswrapper[4988]: I1123 08:14:28.788270 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-57dpb"] Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.907086 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-j8ww7"] Nov 23 08:14:29 crc kubenswrapper[4988]: E1123 08:14:29.907866 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe853168-6e13-40b0-9a8a-89399141ab0e" containerName="mariadb-account-create" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.907883 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe853168-6e13-40b0-9a8a-89399141ab0e" containerName="mariadb-account-create" Nov 23 08:14:29 crc kubenswrapper[4988]: E1123 08:14:29.907895 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368f549f-264d-47c2-beab-89143127ed57" containerName="extract-content" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.907903 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="368f549f-264d-47c2-beab-89143127ed57" containerName="extract-content" Nov 23 08:14:29 crc kubenswrapper[4988]: E1123 08:14:29.907920 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368f549f-264d-47c2-beab-89143127ed57" containerName="registry-server" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.907927 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="368f549f-264d-47c2-beab-89143127ed57" containerName="registry-server" Nov 23 08:14:29 crc kubenswrapper[4988]: E1123 08:14:29.907938 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerName="extract-content" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.907945 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerName="extract-content" Nov 23 08:14:29 crc kubenswrapper[4988]: E1123 08:14:29.907971 4988 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerName="extract-utilities" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.907979 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerName="extract-utilities" Nov 23 08:14:29 crc kubenswrapper[4988]: E1123 08:14:29.907995 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9496c06f-ea1b-4017-8b27-3d9bec1df46d" containerName="mariadb-database-create" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.908002 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9496c06f-ea1b-4017-8b27-3d9bec1df46d" containerName="mariadb-database-create" Nov 23 08:14:29 crc kubenswrapper[4988]: E1123 08:14:29.908017 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerName="registry-server" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.908026 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerName="registry-server" Nov 23 08:14:29 crc kubenswrapper[4988]: E1123 08:14:29.908036 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="368f549f-264d-47c2-beab-89143127ed57" containerName="extract-utilities" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.908043 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="368f549f-264d-47c2-beab-89143127ed57" containerName="extract-utilities" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.908258 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="368f549f-264d-47c2-beab-89143127ed57" containerName="registry-server" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.908276 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9496c06f-ea1b-4017-8b27-3d9bec1df46d" containerName="mariadb-database-create" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.908295 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe853168-6e13-40b0-9a8a-89399141ab0e" containerName="mariadb-account-create" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.908304 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc9512b-7113-46f2-be7a-a68a09def2bc" containerName="registry-server" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.908996 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.911146 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.911182 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qjkgt" Nov 23 08:14:29 crc kubenswrapper[4988]: I1123 08:14:29.923942 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j8ww7"] Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.006007 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-db-sync-config-data\") pod \"barbican-db-sync-j8ww7\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.006074 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-combined-ca-bundle\") pod \"barbican-db-sync-j8ww7\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.006127 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhds6\" (UniqueName: \"kubernetes.io/projected/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-kube-api-access-bhds6\") pod \"barbican-db-sync-j8ww7\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.107886 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-db-sync-config-data\") pod \"barbican-db-sync-j8ww7\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.107977 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-combined-ca-bundle\") pod \"barbican-db-sync-j8ww7\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.108031 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhds6\" (UniqueName: \"kubernetes.io/projected/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-kube-api-access-bhds6\") pod \"barbican-db-sync-j8ww7\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.113852 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-combined-ca-bundle\") pod \"barbican-db-sync-j8ww7\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.117429 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-db-sync-config-data\") pod \"barbican-db-sync-j8ww7\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.132367 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhds6\" (UniqueName: \"kubernetes.io/projected/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-kube-api-access-bhds6\") pod \"barbican-db-sync-j8ww7\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.230956 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.514213 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc9512b-7113-46f2-be7a-a68a09def2bc" path="/var/lib/kubelet/pods/0fc9512b-7113-46f2-be7a-a68a09def2bc/volumes" Nov 23 08:14:30 crc kubenswrapper[4988]: I1123 08:14:30.520564 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j8ww7"] Nov 23 08:14:30 crc kubenswrapper[4988]: W1123 08:14:30.527083 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9d07ad8_a6d6_4130_bc1e_b5c845e7e9ec.slice/crio-f86be1a2b38e8e628b43e47970c5eb5cefdc673ee3edee22b36ab65642f27e3e WatchSource:0}: Error finding container f86be1a2b38e8e628b43e47970c5eb5cefdc673ee3edee22b36ab65642f27e3e: Status 404 returned error can't find the container with id f86be1a2b38e8e628b43e47970c5eb5cefdc673ee3edee22b36ab65642f27e3e Nov 23 08:14:31 crc kubenswrapper[4988]: I1123 08:14:31.455749 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j8ww7" event={"ID":"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec","Type":"ContainerStarted","Data":"f86be1a2b38e8e628b43e47970c5eb5cefdc673ee3edee22b36ab65642f27e3e"} Nov 23 08:14:34 crc kubenswrapper[4988]: I1123 08:14:34.496776 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:14:34 crc kubenswrapper[4988]: E1123 08:14:34.497474 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:14:36 crc kubenswrapper[4988]: I1123 08:14:36.505865 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j8ww7" event={"ID":"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec","Type":"ContainerStarted","Data":"f09db2d873d3009e0aaa3e744ba0a402cd8d5356e08c3f169765db376b60646d"} Nov 23 08:14:36 crc kubenswrapper[4988]: I1123 08:14:36.517316 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-j8ww7" podStartSLOduration=1.8339102779999998 podStartE2EDuration="7.517298191s" podCreationTimestamp="2025-11-23 08:14:29 +0000 UTC" firstStartedPulling="2025-11-23 08:14:30.534374138 +0000 UTC m=+5322.842886911" lastFinishedPulling="2025-11-23 08:14:36.217762061 +0000 UTC m=+5328.526274824" observedRunningTime="2025-11-23 08:14:36.514521543 +0000 UTC m=+5328.823034306" 
watchObservedRunningTime="2025-11-23 08:14:36.517298191 +0000 UTC m=+5328.825810954" Nov 23 08:14:38 crc kubenswrapper[4988]: I1123 08:14:38.523668 4988 generic.go:334] "Generic (PLEG): container finished" podID="c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec" containerID="f09db2d873d3009e0aaa3e744ba0a402cd8d5356e08c3f169765db376b60646d" exitCode=0 Nov 23 08:14:38 crc kubenswrapper[4988]: I1123 08:14:38.523804 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j8ww7" event={"ID":"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec","Type":"ContainerDied","Data":"f09db2d873d3009e0aaa3e744ba0a402cd8d5356e08c3f169765db376b60646d"} Nov 23 08:14:39 crc kubenswrapper[4988]: I1123 08:14:39.931629 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.092269 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-db-sync-config-data\") pod \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.092383 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhds6\" (UniqueName: \"kubernetes.io/projected/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-kube-api-access-bhds6\") pod \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.092488 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-combined-ca-bundle\") pod \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\" (UID: \"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec\") " Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.098536 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-kube-api-access-bhds6" (OuterVolumeSpecName: "kube-api-access-bhds6") pod "c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec" (UID: "c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec"). InnerVolumeSpecName "kube-api-access-bhds6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.099626 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec" (UID: "c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.115497 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec" (UID: "c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec"). InnerVolumeSpecName "combined-ca-bundle". 
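
The pod_startup_latency_tracker entry above for barbican-db-sync-j8ww7 is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window, which the monotonic m=+ offsets let you verify directly:

    package main

    import "fmt"

    func main() {
        const (
            e2e          = 7.517298191    // podStartE2EDuration, seconds
            firstPulling = 5322.842886911 // firstStartedPulling, m=+ offset
            lastPulled   = 5328.526274824 // lastFinishedPulling, m=+ offset
        )
        // 7.517298191 - (5328.526274824 - 5322.842886911) = 1.833910278
        fmt.Printf("podStartSLOduration = %.9f s\n", e2e-(lastPulled-firstPulling))
    }

That matches the logged podStartSLOduration=1.8339102779999998 up to float rounding; the other 5.68 s of the 7.52 s end-to-end startup went to pulling the image.
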
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.194350 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.194576 4988 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.194652 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhds6\" (UniqueName: \"kubernetes.io/projected/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec-kube-api-access-bhds6\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.543097 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j8ww7" event={"ID":"c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec","Type":"ContainerDied","Data":"f86be1a2b38e8e628b43e47970c5eb5cefdc673ee3edee22b36ab65642f27e3e"} Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.543156 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86be1a2b38e8e628b43e47970c5eb5cefdc673ee3edee22b36ab65642f27e3e" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.543477 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j8ww7" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.854032 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6654778d7f-p8j7f"] Nov 23 08:14:40 crc kubenswrapper[4988]: E1123 08:14:40.854582 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec" containerName="barbican-db-sync" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.854603 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec" containerName="barbican-db-sync" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.854825 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec" containerName="barbican-db-sync" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.860129 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.866698 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.867350 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qjkgt" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.876013 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.908785 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6654778d7f-p8j7f"] Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.929907 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57b89bfb9d-qwlsk"] Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.938602 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:40 crc kubenswrapper[4988]: I1123 08:14:40.949886 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018520 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff9e262-d8ae-457d-add2-26dc18b4e376-logs\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018555 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff9e262-d8ae-457d-add2-26dc18b4e376-combined-ca-bundle\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018579 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-logs\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018634 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-config-data\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018661 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff9e262-d8ae-457d-add2-26dc18b4e376-config-data\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018677 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ff9e262-d8ae-457d-add2-26dc18b4e376-config-data-custom\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018694 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cws4z\" (UniqueName: \"kubernetes.io/projected/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-kube-api-access-cws4z\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018714 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stsmj\" (UniqueName: \"kubernetes.io/projected/3ff9e262-d8ae-457d-add2-26dc18b4e376-kube-api-access-stsmj\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: 
\"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018736 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-config-data-custom\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.018761 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-combined-ca-bundle\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.021259 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57b89bfb9d-qwlsk"] Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.088259 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dd4db9d9-vn8nb"] Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.089645 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.136258 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd4db9d9-vn8nb"] Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.137720 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-config-data-custom\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.137772 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-combined-ca-bundle\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.137818 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-config\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.137849 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.137889 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkwkm\" (UniqueName: 
\"kubernetes.io/projected/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-kube-api-access-rkwkm\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.137916 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.137942 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff9e262-d8ae-457d-add2-26dc18b4e376-logs\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.137967 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff9e262-d8ae-457d-add2-26dc18b4e376-combined-ca-bundle\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.137994 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-logs\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.138066 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-config-data\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.138102 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-dns-svc\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.138129 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff9e262-d8ae-457d-add2-26dc18b4e376-config-data\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.138153 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ff9e262-d8ae-457d-add2-26dc18b4e376-config-data-custom\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.138181 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cws4z\" (UniqueName: \"kubernetes.io/projected/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-kube-api-access-cws4z\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.138237 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stsmj\" (UniqueName: \"kubernetes.io/projected/3ff9e262-d8ae-457d-add2-26dc18b4e376-kube-api-access-stsmj\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.139894 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff9e262-d8ae-457d-add2-26dc18b4e376-logs\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.165787 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-logs\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.193041 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-config-data\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.194555 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-combined-ca-bundle\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.200099 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff9e262-d8ae-457d-add2-26dc18b4e376-combined-ca-bundle\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.201696 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stsmj\" (UniqueName: \"kubernetes.io/projected/3ff9e262-d8ae-457d-add2-26dc18b4e376-kube-api-access-stsmj\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.203650 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-config-data-custom\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.205453 
4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ff9e262-d8ae-457d-add2-26dc18b4e376-config-data\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.209789 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cws4z\" (UniqueName: \"kubernetes.io/projected/9b638fc0-e2a1-4624-aa58-525d3e06ff6e-kube-api-access-cws4z\") pod \"barbican-keystone-listener-57b89bfb9d-qwlsk\" (UID: \"9b638fc0-e2a1-4624-aa58-525d3e06ff6e\") " pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.224981 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ff9e262-d8ae-457d-add2-26dc18b4e376-config-data-custom\") pod \"barbican-worker-6654778d7f-p8j7f\" (UID: \"3ff9e262-d8ae-457d-add2-26dc18b4e376\") " pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.227332 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6856f8c5f8-nptqt"] Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.254974 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6654778d7f-p8j7f" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.263051 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265079 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-config\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265141 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265210 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkwkm\" (UniqueName: \"kubernetes.io/projected/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-kube-api-access-rkwkm\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265236 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265268 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1605124e-e60b-4099-b180-f4a7b5980c0e-logs\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: 
\"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265295 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265322 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-combined-ca-bundle\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265376 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vp5m\" (UniqueName: \"kubernetes.io/projected/1605124e-e60b-4099-b180-f4a7b5980c0e-kube-api-access-5vp5m\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265412 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-dns-svc\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.265467 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data-custom\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.266844 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.267881 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6856f8c5f8-nptqt"] Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.268567 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.269171 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.269254 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-dns-svc\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " 
pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.270059 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-config\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.295399 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkwkm\" (UniqueName: \"kubernetes.io/projected/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-kube-api-access-rkwkm\") pod \"dnsmasq-dns-7dd4db9d9-vn8nb\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.309608 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.369599 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data-custom\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.369698 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1605124e-e60b-4099-b180-f4a7b5980c0e-logs\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.369723 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.369744 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-combined-ca-bundle\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.369786 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vp5m\" (UniqueName: \"kubernetes.io/projected/1605124e-e60b-4099-b180-f4a7b5980c0e-kube-api-access-5vp5m\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.372704 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1605124e-e60b-4099-b180-f4a7b5980c0e-logs\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.383390 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-combined-ca-bundle\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.383955 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data-custom\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.385724 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.395787 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vp5m\" (UniqueName: \"kubernetes.io/projected/1605124e-e60b-4099-b180-f4a7b5980c0e-kube-api-access-5vp5m\") pod \"barbican-api-6856f8c5f8-nptqt\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.572252 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.666533 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.719356 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6654778d7f-p8j7f"] Nov 23 08:14:41 crc kubenswrapper[4988]: I1123 08:14:41.814645 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57b89bfb9d-qwlsk"] Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 08:14:42.096864 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd4db9d9-vn8nb"] Nov 23 08:14:42 crc kubenswrapper[4988]: W1123 08:14:42.111298 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7724d4f_e1aa_4e5c_b6e8_f478eedac62a.slice/crio-f94cc2eba734621f6a435b70698abae576be6a9611274fa10edf4b3f43db1f96 WatchSource:0}: Error finding container f94cc2eba734621f6a435b70698abae576be6a9611274fa10edf4b3f43db1f96: Status 404 returned error can't find the container with id f94cc2eba734621f6a435b70698abae576be6a9611274fa10edf4b3f43db1f96 Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 08:14:42.199343 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6856f8c5f8-nptqt"] Nov 23 08:14:42 crc kubenswrapper[4988]: W1123 08:14:42.208572 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1605124e_e60b_4099_b180_f4a7b5980c0e.slice/crio-3d8de737c5396595a0b8650d8bf0bead81b4320ad2942985c7f9017bdf21d26f WatchSource:0}: Error finding container 3d8de737c5396595a0b8650d8bf0bead81b4320ad2942985c7f9017bdf21d26f: Status 404 returned error can't find the container with id 3d8de737c5396595a0b8650d8bf0bead81b4320ad2942985c7f9017bdf21d26f Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 
Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 08:14:42.561560 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6654778d7f-p8j7f" event={"ID":"3ff9e262-d8ae-457d-add2-26dc18b4e376","Type":"ContainerStarted","Data":"0c47e685e5cb9d3e1b991af1a2125b81c98151bbb17b821b069477bdd2507570"}
Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 08:14:42.562821 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6856f8c5f8-nptqt" event={"ID":"1605124e-e60b-4099-b180-f4a7b5980c0e","Type":"ContainerStarted","Data":"c3b137f4895b262be98ee8b0ffb048ef1a40f4f54a1dfcf9e253c5bfd2e04431"}
Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 08:14:42.562846 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6856f8c5f8-nptqt" event={"ID":"1605124e-e60b-4099-b180-f4a7b5980c0e","Type":"ContainerStarted","Data":"3d8de737c5396595a0b8650d8bf0bead81b4320ad2942985c7f9017bdf21d26f"}
Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 08:14:42.564207 4988 generic.go:334] "Generic (PLEG): container finished" podID="e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" containerID="b5bbfb8549c2c322b5c1072ae2b16c27cf4d4b6df916943c9f0d2183ac792c99" exitCode=0
Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 08:14:42.564252 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" event={"ID":"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a","Type":"ContainerDied","Data":"b5bbfb8549c2c322b5c1072ae2b16c27cf4d4b6df916943c9f0d2183ac792c99"}
Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 08:14:42.564268 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" event={"ID":"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a","Type":"ContainerStarted","Data":"f94cc2eba734621f6a435b70698abae576be6a9611274fa10edf4b3f43db1f96"}
Nov 23 08:14:42 crc kubenswrapper[4988]: I1123 08:14:42.566532 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" event={"ID":"9b638fc0-e2a1-4624-aa58-525d3e06ff6e","Type":"ContainerStarted","Data":"3acb10679a141eb39838c5c56c2c09aae450e76059949f9e147af829f483d88e"}
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.342190 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-74558fc978-jmz5k"]
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.343808 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.347828 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.348345 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.359317 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74558fc978-jmz5k"]
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.510326 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tsg\" (UniqueName: \"kubernetes.io/projected/d4e6ece2-9c04-46db-b25d-098cea0e6fde-kube-api-access-s5tsg\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.510371 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-combined-ca-bundle\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.510826 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-public-tls-certs\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.511346 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e6ece2-9c04-46db-b25d-098cea0e6fde-logs\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.511407 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-config-data\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.511433 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-config-data-custom\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.511509 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-internal-tls-certs\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.613154 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e6ece2-9c04-46db-b25d-098cea0e6fde-logs\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.613223 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-config-data\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.613242 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-config-data-custom\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.613267 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-internal-tls-certs\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.613339 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tsg\" (UniqueName: \"kubernetes.io/projected/d4e6ece2-9c04-46db-b25d-098cea0e6fde-kube-api-access-s5tsg\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.613365 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-combined-ca-bundle\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.613407 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-public-tls-certs\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.614624 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e6ece2-9c04-46db-b25d-098cea0e6fde-logs\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.618827 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-public-tls-certs\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.620454 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-internal-tls-certs\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.620742 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-combined-ca-bundle\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.621734 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-config-data-custom\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.622661 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e6ece2-9c04-46db-b25d-098cea0e6fde-config-data\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:43 crc kubenswrapper[4988]: I1123 08:14:43.636621 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tsg\" (UniqueName: \"kubernetes.io/projected/d4e6ece2-9c04-46db-b25d-098cea0e6fde-kube-api-access-s5tsg\") pod \"barbican-api-74558fc978-jmz5k\" (UID: \"d4e6ece2-9c04-46db-b25d-098cea0e6fde\") " pod="openstack/barbican-api-74558fc978-jmz5k"
Need to start a new one" pod="openstack/barbican-api-74558fc978-jmz5k" Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.288340 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-74558fc978-jmz5k"] Nov 23 08:14:44 crc kubenswrapper[4988]: W1123 08:14:44.292664 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4e6ece2_9c04_46db_b25d_098cea0e6fde.slice/crio-b9f306b536ebc45215e80b860d1b5fca03d06a356c61edd8b53dde32b1909adc WatchSource:0}: Error finding container b9f306b536ebc45215e80b860d1b5fca03d06a356c61edd8b53dde32b1909adc: Status 404 returned error can't find the container with id b9f306b536ebc45215e80b860d1b5fca03d06a356c61edd8b53dde32b1909adc Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.590567 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6654778d7f-p8j7f" event={"ID":"3ff9e262-d8ae-457d-add2-26dc18b4e376","Type":"ContainerStarted","Data":"e523b9a613fb7aad9b7e3cab88bb835515726bce7485ad559c76e5a56a3766d4"} Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.590924 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6654778d7f-p8j7f" event={"ID":"3ff9e262-d8ae-457d-add2-26dc18b4e376","Type":"ContainerStarted","Data":"97b22cf783b20050cb3c65fa44a50c65656b0ae9ab7ba7888c347e4d5cfcd15c"} Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.594784 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6856f8c5f8-nptqt" event={"ID":"1605124e-e60b-4099-b180-f4a7b5980c0e","Type":"ContainerStarted","Data":"aae762b6c8986ec19cfe8b9431684dc1961b9de235dd0bef2636e1c325c85ed6"} Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.594974 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.595139 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.600725 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74558fc978-jmz5k" event={"ID":"d4e6ece2-9c04-46db-b25d-098cea0e6fde","Type":"ContainerStarted","Data":"a0c0c8d380dcf80c1083905e81aa1df135ca45abf40eb09d6f545a4424cdff86"} Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.600869 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74558fc978-jmz5k" event={"ID":"d4e6ece2-9c04-46db-b25d-098cea0e6fde","Type":"ContainerStarted","Data":"b9f306b536ebc45215e80b860d1b5fca03d06a356c61edd8b53dde32b1909adc"} Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.602611 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" event={"ID":"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a","Type":"ContainerStarted","Data":"8dd7e323d290e23af89b58b973026eb2d4652523c3fcf75fdd3d2d58fa07241f"} Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.603628 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.614954 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6654778d7f-p8j7f" podStartSLOduration=3.120265515 podStartE2EDuration="4.614767214s" podCreationTimestamp="2025-11-23 08:14:40 +0000 UTC" firstStartedPulling="2025-11-23 
08:14:41.745152332 +0000 UTC m=+5334.053665095" lastFinishedPulling="2025-11-23 08:14:43.239654031 +0000 UTC m=+5335.548166794" observedRunningTime="2025-11-23 08:14:44.613811971 +0000 UTC m=+5336.922324734" watchObservedRunningTime="2025-11-23 08:14:44.614767214 +0000 UTC m=+5336.923279987" Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.616938 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" event={"ID":"9b638fc0-e2a1-4624-aa58-525d3e06ff6e","Type":"ContainerStarted","Data":"5dc5b20bc47db07fbe1750bbe9fe8692b63f59e18e6464abfe811a9ce8015807"} Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.616988 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" event={"ID":"9b638fc0-e2a1-4624-aa58-525d3e06ff6e","Type":"ContainerStarted","Data":"4000fb5622f994687c1d6fe74d9cee8df5369f9bb4a90db83c37e0a70971cd00"} Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.635785 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6856f8c5f8-nptqt" podStartSLOduration=3.635764479 podStartE2EDuration="3.635764479s" podCreationTimestamp="2025-11-23 08:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:14:44.633648777 +0000 UTC m=+5336.942161550" watchObservedRunningTime="2025-11-23 08:14:44.635764479 +0000 UTC m=+5336.944277242" Nov 23 08:14:44 crc kubenswrapper[4988]: I1123 08:14:44.659666 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" podStartSLOduration=3.659639894 podStartE2EDuration="3.659639894s" podCreationTimestamp="2025-11-23 08:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:14:44.653625896 +0000 UTC m=+5336.962138659" watchObservedRunningTime="2025-11-23 08:14:44.659639894 +0000 UTC m=+5336.968152657" Nov 23 08:14:45 crc kubenswrapper[4988]: I1123 08:14:45.630777 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-74558fc978-jmz5k" event={"ID":"d4e6ece2-9c04-46db-b25d-098cea0e6fde","Type":"ContainerStarted","Data":"76e4491a34dc92c2c00b51ffdf7219f9bd64cf8eae26672866e3b234bddca809"} Nov 23 08:14:45 crc kubenswrapper[4988]: I1123 08:14:45.680613 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57b89bfb9d-qwlsk" podStartSLOduration=4.269038223 podStartE2EDuration="5.680590609s" podCreationTimestamp="2025-11-23 08:14:40 +0000 UTC" firstStartedPulling="2025-11-23 08:14:41.828056194 +0000 UTC m=+5334.136568957" lastFinishedPulling="2025-11-23 08:14:43.23960858 +0000 UTC m=+5335.548121343" observedRunningTime="2025-11-23 08:14:44.678968187 +0000 UTC m=+5336.987480960" watchObservedRunningTime="2025-11-23 08:14:45.680590609 +0000 UTC m=+5337.989103382" Nov 23 08:14:45 crc kubenswrapper[4988]: I1123 08:14:45.683438 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-74558fc978-jmz5k" podStartSLOduration=2.683426738 podStartE2EDuration="2.683426738s" podCreationTimestamp="2025-11-23 08:14:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:14:45.67046442 +0000 UTC m=+5337.978977223" 
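[Editor's note] The pod_startup_latency_tracker entries report two numbers: podStartE2EDuration (creation to observed running) and podStartSLOduration, which discounts image-pull time. For barbican-worker the gap is 4.614767214 - 3.120265515 = 1.494501699s, exactly lastFinishedPulling minus firstStartedPulling (08:14:43.239654031 - 08:14:41.745152332); pods that pulled nothing show the Go zero time "0001-01-01 00:00:00 +0000 UTC" and equal SLO and E2E values. A hypothetical extraction sketch (field names taken verbatim from the entries above):

import re

LAT = re.compile(
    r'pod="(?P<pod>[\w/-]+)" podStartSLOduration=(?P<slo>[\d.]+) '
    r'podStartE2EDuration="(?P<e2e>[\d.]+)s"')

def pull_overhead(line):
    """E2E minus SLO duration is the image-pull window for the pod,
    zero when firstStartedPulling is the zero time."""
    m = LAT.search(line)
    if m:
        return m["pod"], float(m["e2e"]) - float(m["slo"])

# On the barbican-worker entry above this yields roughly 1.4945 seconds.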
Nov 23 08:14:46 crc kubenswrapper[4988]: I1123 08:14:46.640281 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:46 crc kubenswrapper[4988]: I1123 08:14:46.640392 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-74558fc978-jmz5k"
Nov 23 08:14:47 crc kubenswrapper[4988]: I1123 08:14:47.496730 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9"
Nov 23 08:14:47 crc kubenswrapper[4988]: E1123 08:14:47.497614 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:14:51 crc kubenswrapper[4988]: I1123 08:14:51.574588 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb"
Nov 23 08:14:51 crc kubenswrapper[4988]: I1123 08:14:51.662077 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8688b44c97-kwvpk"]
Nov 23 08:14:51 crc kubenswrapper[4988]: I1123 08:14:51.662492 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" podUID="5e5cfc66-4b4d-476c-a704-0dc6de0f724f" containerName="dnsmasq-dns" containerID="cri-o://31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29" gracePeriod=10
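[Editor's note] The recurring "back-off 5m0s" error for machine-config-daemon-jnwbw is the kubelet's crash-loop backoff at its ceiling: under the upstream defaults (assumed here, not stated in this log) the restart delay starts at 10s, doubles on each failure, and caps at 5 minutes, resetting only after a sufficiently long successful run. A sketch of that progression, with the default values as assumptions:

# Kubelet-style crash-loop backoff; base and cap are assumed upstream
# defaults (10s initial delay, doubling, capped at 5 minutes).
def backoff_seconds(restart_count, base=10, cap=300):
    return min(base * 2 ** restart_count, cap)

print([backoff_seconds(n) for n in range(6)])  # [10, 20, 40, 80, 160, 300]

So by the time a container logs "back-off 5m0s", it has already failed enough consecutive restarts to exhaust the doubling schedule.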
Need to start a new one" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.212567 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-config\") pod \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.212840 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-dns-svc\") pod \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.212970 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-sb\") pod \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.213222 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-nb\") pod \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.213307 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgct\" (UniqueName: \"kubernetes.io/projected/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-kube-api-access-fxgct\") pod \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\" (UID: \"5e5cfc66-4b4d-476c-a704-0dc6de0f724f\") " Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.240415 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-kube-api-access-fxgct" (OuterVolumeSpecName: "kube-api-access-fxgct") pod "5e5cfc66-4b4d-476c-a704-0dc6de0f724f" (UID: "5e5cfc66-4b4d-476c-a704-0dc6de0f724f"). InnerVolumeSpecName "kube-api-access-fxgct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.260021 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e5cfc66-4b4d-476c-a704-0dc6de0f724f" (UID: "5e5cfc66-4b4d-476c-a704-0dc6de0f724f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.260516 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e5cfc66-4b4d-476c-a704-0dc6de0f724f" (UID: "5e5cfc66-4b4d-476c-a704-0dc6de0f724f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.267949 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e5cfc66-4b4d-476c-a704-0dc6de0f724f" (UID: "5e5cfc66-4b4d-476c-a704-0dc6de0f724f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.277334 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-config" (OuterVolumeSpecName: "config") pod "5e5cfc66-4b4d-476c-a704-0dc6de0f724f" (UID: "5e5cfc66-4b4d-476c-a704-0dc6de0f724f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.315454 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.315489 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.315499 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgct\" (UniqueName: \"kubernetes.io/projected/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-kube-api-access-fxgct\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.315510 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.315519 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e5cfc66-4b4d-476c-a704-0dc6de0f724f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.727456 4988 generic.go:334] "Generic (PLEG): container finished" podID="5e5cfc66-4b4d-476c-a704-0dc6de0f724f" containerID="31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29" exitCode=0 Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.727840 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" event={"ID":"5e5cfc66-4b4d-476c-a704-0dc6de0f724f","Type":"ContainerDied","Data":"31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29"} Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.727883 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" event={"ID":"5e5cfc66-4b4d-476c-a704-0dc6de0f724f","Type":"ContainerDied","Data":"e937d1e0724ce08418bf008bc7ee8d62e6c27067913e9bf8bd09b60fe9aa869b"} Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.727909 4988 scope.go:117] "RemoveContainer" containerID="31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.728064 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8688b44c97-kwvpk" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.756532 4988 scope.go:117] "RemoveContainer" containerID="7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.759449 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8688b44c97-kwvpk"] Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.773562 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8688b44c97-kwvpk"] Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.778943 4988 scope.go:117] "RemoveContainer" containerID="31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29" Nov 23 08:14:52 crc kubenswrapper[4988]: E1123 08:14:52.782669 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29\": container with ID starting with 31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29 not found: ID does not exist" containerID="31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.782722 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29"} err="failed to get container status \"31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29\": rpc error: code = NotFound desc = could not find container \"31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29\": container with ID starting with 31bad8897b85a855b7e9bc12d5eaf164a90918187b7efacb402cc085a4501c29 not found: ID does not exist" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.782753 4988 scope.go:117] "RemoveContainer" containerID="7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e" Nov 23 08:14:52 crc kubenswrapper[4988]: E1123 08:14:52.783107 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e\": container with ID starting with 7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e not found: ID does not exist" containerID="7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e" Nov 23 08:14:52 crc kubenswrapper[4988]: I1123 08:14:52.783160 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e"} err="failed to get container status \"7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e\": rpc error: code = NotFound desc = could not find container \"7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e\": container with ID starting with 7de3cbe5ec8a071cc47afb22f5a8a9329609f52b51c6679f1ad4cbd1e47d343e not found: ID does not exist" Nov 23 08:14:53 crc kubenswrapper[4988]: I1123 08:14:53.100590 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:53 crc kubenswrapper[4988]: I1123 08:14:53.182515 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:54 crc kubenswrapper[4988]: I1123 08:14:54.509843 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5e5cfc66-4b4d-476c-a704-0dc6de0f724f" path="/var/lib/kubelet/pods/5e5cfc66-4b4d-476c-a704-0dc6de0f724f/volumes" Nov 23 08:14:55 crc kubenswrapper[4988]: I1123 08:14:55.088043 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74558fc978-jmz5k" Nov 23 08:14:55 crc kubenswrapper[4988]: I1123 08:14:55.178887 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-74558fc978-jmz5k" Nov 23 08:14:55 crc kubenswrapper[4988]: I1123 08:14:55.260763 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6856f8c5f8-nptqt"] Nov 23 08:14:55 crc kubenswrapper[4988]: I1123 08:14:55.260970 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6856f8c5f8-nptqt" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api-log" containerID="cri-o://c3b137f4895b262be98ee8b0ffb048ef1a40f4f54a1dfcf9e253c5bfd2e04431" gracePeriod=30 Nov 23 08:14:55 crc kubenswrapper[4988]: I1123 08:14:55.261122 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6856f8c5f8-nptqt" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api" containerID="cri-o://aae762b6c8986ec19cfe8b9431684dc1961b9de235dd0bef2636e1c325c85ed6" gracePeriod=30 Nov 23 08:14:55 crc kubenswrapper[4988]: I1123 08:14:55.755882 4988 generic.go:334] "Generic (PLEG): container finished" podID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerID="c3b137f4895b262be98ee8b0ffb048ef1a40f4f54a1dfcf9e253c5bfd2e04431" exitCode=143 Nov 23 08:14:55 crc kubenswrapper[4988]: I1123 08:14:55.755971 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6856f8c5f8-nptqt" event={"ID":"1605124e-e60b-4099-b180-f4a7b5980c0e","Type":"ContainerDied","Data":"c3b137f4895b262be98ee8b0ffb048ef1a40f4f54a1dfcf9e253c5bfd2e04431"} Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.416572 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6856f8c5f8-nptqt" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.31:9311/healthcheck\": read tcp 10.217.0.2:59008->10.217.1.31:9311: read: connection reset by peer" Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.416626 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6856f8c5f8-nptqt" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.31:9311/healthcheck\": read tcp 10.217.0.2:59022->10.217.1.31:9311: read: connection reset by peer" Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.784497 4988 generic.go:334] "Generic (PLEG): container finished" podID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerID="aae762b6c8986ec19cfe8b9431684dc1961b9de235dd0bef2636e1c325c85ed6" exitCode=0 Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.784534 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6856f8c5f8-nptqt" event={"ID":"1605124e-e60b-4099-b180-f4a7b5980c0e","Type":"ContainerDied","Data":"aae762b6c8986ec19cfe8b9431684dc1961b9de235dd0bef2636e1c325c85ed6"} Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.784555 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6856f8c5f8-nptqt" 
event={"ID":"1605124e-e60b-4099-b180-f4a7b5980c0e","Type":"ContainerDied","Data":"3d8de737c5396595a0b8650d8bf0bead81b4320ad2942985c7f9017bdf21d26f"} Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.784566 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d8de737c5396595a0b8650d8bf0bead81b4320ad2942985c7f9017bdf21d26f" Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.839178 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.941455 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-combined-ca-bundle\") pod \"1605124e-e60b-4099-b180-f4a7b5980c0e\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.941599 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1605124e-e60b-4099-b180-f4a7b5980c0e-logs\") pod \"1605124e-e60b-4099-b180-f4a7b5980c0e\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.941637 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vp5m\" (UniqueName: \"kubernetes.io/projected/1605124e-e60b-4099-b180-f4a7b5980c0e-kube-api-access-5vp5m\") pod \"1605124e-e60b-4099-b180-f4a7b5980c0e\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.941661 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data-custom\") pod \"1605124e-e60b-4099-b180-f4a7b5980c0e\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.941734 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data\") pod \"1605124e-e60b-4099-b180-f4a7b5980c0e\" (UID: \"1605124e-e60b-4099-b180-f4a7b5980c0e\") " Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.942137 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1605124e-e60b-4099-b180-f4a7b5980c0e-logs" (OuterVolumeSpecName: "logs") pod "1605124e-e60b-4099-b180-f4a7b5980c0e" (UID: "1605124e-e60b-4099-b180-f4a7b5980c0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.947864 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1605124e-e60b-4099-b180-f4a7b5980c0e" (UID: "1605124e-e60b-4099-b180-f4a7b5980c0e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.950504 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1605124e-e60b-4099-b180-f4a7b5980c0e-kube-api-access-5vp5m" (OuterVolumeSpecName: "kube-api-access-5vp5m") pod "1605124e-e60b-4099-b180-f4a7b5980c0e" (UID: "1605124e-e60b-4099-b180-f4a7b5980c0e"). 
InnerVolumeSpecName "kube-api-access-5vp5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.982387 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1605124e-e60b-4099-b180-f4a7b5980c0e" (UID: "1605124e-e60b-4099-b180-f4a7b5980c0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:14:58 crc kubenswrapper[4988]: I1123 08:14:58.988006 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data" (OuterVolumeSpecName: "config-data") pod "1605124e-e60b-4099-b180-f4a7b5980c0e" (UID: "1605124e-e60b-4099-b180-f4a7b5980c0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:14:59 crc kubenswrapper[4988]: I1123 08:14:59.043888 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:59 crc kubenswrapper[4988]: I1123 08:14:59.043914 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1605124e-e60b-4099-b180-f4a7b5980c0e-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:59 crc kubenswrapper[4988]: I1123 08:14:59.043925 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vp5m\" (UniqueName: \"kubernetes.io/projected/1605124e-e60b-4099-b180-f4a7b5980c0e-kube-api-access-5vp5m\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:59 crc kubenswrapper[4988]: I1123 08:14:59.043936 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:59 crc kubenswrapper[4988]: I1123 08:14:59.043943 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1605124e-e60b-4099-b180-f4a7b5980c0e-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:14:59 crc kubenswrapper[4988]: I1123 08:14:59.497612 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:14:59 crc kubenswrapper[4988]: E1123 08:14:59.498365 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:14:59 crc kubenswrapper[4988]: I1123 08:14:59.807500 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6856f8c5f8-nptqt" Nov 23 08:14:59 crc kubenswrapper[4988]: I1123 08:14:59.870117 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6856f8c5f8-nptqt"] Nov 23 08:14:59 crc kubenswrapper[4988]: I1123 08:14:59.878035 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6856f8c5f8-nptqt"] Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.154358 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh"] Nov 23 08:15:00 crc kubenswrapper[4988]: E1123 08:15:00.155033 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5cfc66-4b4d-476c-a704-0dc6de0f724f" containerName="init" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.155082 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5cfc66-4b4d-476c-a704-0dc6de0f724f" containerName="init" Nov 23 08:15:00 crc kubenswrapper[4988]: E1123 08:15:00.155125 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e5cfc66-4b4d-476c-a704-0dc6de0f724f" containerName="dnsmasq-dns" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.155142 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e5cfc66-4b4d-476c-a704-0dc6de0f724f" containerName="dnsmasq-dns" Nov 23 08:15:00 crc kubenswrapper[4988]: E1123 08:15:00.155185 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api-log" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.155237 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api-log" Nov 23 08:15:00 crc kubenswrapper[4988]: E1123 08:15:00.155273 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.155289 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.156976 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e5cfc66-4b4d-476c-a704-0dc6de0f724f" containerName="dnsmasq-dns" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.157031 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api-log" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.157067 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" containerName="barbican-api" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.158448 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.161914 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.163184 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.169453 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/684c09c5-e69f-40d1-b23b-2f0b6df72025-secret-volume\") pod \"collect-profiles-29398095-sbzjh\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.169735 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gln8h\" (UniqueName: \"kubernetes.io/projected/684c09c5-e69f-40d1-b23b-2f0b6df72025-kube-api-access-gln8h\") pod \"collect-profiles-29398095-sbzjh\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.169821 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/684c09c5-e69f-40d1-b23b-2f0b6df72025-config-volume\") pod \"collect-profiles-29398095-sbzjh\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.204642 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh"] Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.270826 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gln8h\" (UniqueName: \"kubernetes.io/projected/684c09c5-e69f-40d1-b23b-2f0b6df72025-kube-api-access-gln8h\") pod \"collect-profiles-29398095-sbzjh\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.270893 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/684c09c5-e69f-40d1-b23b-2f0b6df72025-config-volume\") pod \"collect-profiles-29398095-sbzjh\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.270931 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/684c09c5-e69f-40d1-b23b-2f0b6df72025-secret-volume\") pod \"collect-profiles-29398095-sbzjh\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.271940 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/684c09c5-e69f-40d1-b23b-2f0b6df72025-config-volume\") pod 
\"collect-profiles-29398095-sbzjh\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.275341 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/684c09c5-e69f-40d1-b23b-2f0b6df72025-secret-volume\") pod \"collect-profiles-29398095-sbzjh\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.315155 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gln8h\" (UniqueName: \"kubernetes.io/projected/684c09c5-e69f-40d1-b23b-2f0b6df72025-kube-api-access-gln8h\") pod \"collect-profiles-29398095-sbzjh\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.486537 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.516228 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1605124e-e60b-4099-b180-f4a7b5980c0e" path="/var/lib/kubelet/pods/1605124e-e60b-4099-b180-f4a7b5980c0e/volumes" Nov 23 08:15:00 crc kubenswrapper[4988]: I1123 08:15:00.965770 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh"] Nov 23 08:15:00 crc kubenswrapper[4988]: W1123 08:15:00.967499 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684c09c5_e69f_40d1_b23b_2f0b6df72025.slice/crio-d180ef9b1dc1a2b91d6674c33ffaed2e0f5238cd9ccba1c4235d1ce7b1847890 WatchSource:0}: Error finding container d180ef9b1dc1a2b91d6674c33ffaed2e0f5238cd9ccba1c4235d1ce7b1847890: Status 404 returned error can't find the container with id d180ef9b1dc1a2b91d6674c33ffaed2e0f5238cd9ccba1c4235d1ce7b1847890 Nov 23 08:15:01 crc kubenswrapper[4988]: I1123 08:15:01.829786 4988 generic.go:334] "Generic (PLEG): container finished" podID="684c09c5-e69f-40d1-b23b-2f0b6df72025" containerID="b762c123fbccf4628ab69fc893fade95909e980ce4a3881fcd7e2892321eeacc" exitCode=0 Nov 23 08:15:01 crc kubenswrapper[4988]: I1123 08:15:01.829884 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" event={"ID":"684c09c5-e69f-40d1-b23b-2f0b6df72025","Type":"ContainerDied","Data":"b762c123fbccf4628ab69fc893fade95909e980ce4a3881fcd7e2892321eeacc"} Nov 23 08:15:01 crc kubenswrapper[4988]: I1123 08:15:01.830098 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" event={"ID":"684c09c5-e69f-40d1-b23b-2f0b6df72025","Type":"ContainerStarted","Data":"d180ef9b1dc1a2b91d6674c33ffaed2e0f5238cd9ccba1c4235d1ce7b1847890"} Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.136689 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.321341 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gln8h\" (UniqueName: \"kubernetes.io/projected/684c09c5-e69f-40d1-b23b-2f0b6df72025-kube-api-access-gln8h\") pod \"684c09c5-e69f-40d1-b23b-2f0b6df72025\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.321533 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/684c09c5-e69f-40d1-b23b-2f0b6df72025-secret-volume\") pod \"684c09c5-e69f-40d1-b23b-2f0b6df72025\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.321662 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/684c09c5-e69f-40d1-b23b-2f0b6df72025-config-volume\") pod \"684c09c5-e69f-40d1-b23b-2f0b6df72025\" (UID: \"684c09c5-e69f-40d1-b23b-2f0b6df72025\") " Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.322709 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/684c09c5-e69f-40d1-b23b-2f0b6df72025-config-volume" (OuterVolumeSpecName: "config-volume") pod "684c09c5-e69f-40d1-b23b-2f0b6df72025" (UID: "684c09c5-e69f-40d1-b23b-2f0b6df72025"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.326925 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/684c09c5-e69f-40d1-b23b-2f0b6df72025-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "684c09c5-e69f-40d1-b23b-2f0b6df72025" (UID: "684c09c5-e69f-40d1-b23b-2f0b6df72025"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.328180 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/684c09c5-e69f-40d1-b23b-2f0b6df72025-kube-api-access-gln8h" (OuterVolumeSpecName: "kube-api-access-gln8h") pod "684c09c5-e69f-40d1-b23b-2f0b6df72025" (UID: "684c09c5-e69f-40d1-b23b-2f0b6df72025"). InnerVolumeSpecName "kube-api-access-gln8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.423120 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/684c09c5-e69f-40d1-b23b-2f0b6df72025-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.423155 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gln8h\" (UniqueName: \"kubernetes.io/projected/684c09c5-e69f-40d1-b23b-2f0b6df72025-kube-api-access-gln8h\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.423165 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/684c09c5-e69f-40d1-b23b-2f0b6df72025-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.846651 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" event={"ID":"684c09c5-e69f-40d1-b23b-2f0b6df72025","Type":"ContainerDied","Data":"d180ef9b1dc1a2b91d6674c33ffaed2e0f5238cd9ccba1c4235d1ce7b1847890"} Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.846722 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d180ef9b1dc1a2b91d6674c33ffaed2e0f5238cd9ccba1c4235d1ce7b1847890" Nov 23 08:15:03 crc kubenswrapper[4988]: I1123 08:15:03.846782 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh" Nov 23 08:15:04 crc kubenswrapper[4988]: I1123 08:15:04.236184 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"] Nov 23 08:15:04 crc kubenswrapper[4988]: I1123 08:15:04.248362 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-g6d46"] Nov 23 08:15:04 crc kubenswrapper[4988]: I1123 08:15:04.511729 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a307d56-a956-4aae-84ad-49f0559c6252" path="/var/lib/kubelet/pods/4a307d56-a956-4aae-84ad-49f0559c6252/volumes" Nov 23 08:15:11 crc kubenswrapper[4988]: I1123 08:15:11.496119 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:15:11 crc kubenswrapper[4988]: E1123 08:15:11.498675 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.339366 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-f2zbd"] Nov 23 08:15:22 crc kubenswrapper[4988]: E1123 08:15:22.340378 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684c09c5-e69f-40d1-b23b-2f0b6df72025" containerName="collect-profiles" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.340395 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="684c09c5-e69f-40d1-b23b-2f0b6df72025" containerName="collect-profiles" Nov 23 08:15:22 crc kubenswrapper[4988]: 
I1123 08:15:22.343886 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="684c09c5-e69f-40d1-b23b-2f0b6df72025" containerName="collect-profiles" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.344631 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.347697 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-f2zbd"] Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.444061 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4926-account-create-qq7gk"] Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.445430 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.447144 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.454250 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4926-account-create-qq7gk"] Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.488701 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-operator-scripts\") pod \"neutron-db-create-f2zbd\" (UID: \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\") " pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.488889 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmkbl\" (UniqueName: \"kubernetes.io/projected/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-kube-api-access-qmkbl\") pod \"neutron-db-create-f2zbd\" (UID: \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\") " pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.590305 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-operator-scripts\") pod \"neutron-db-create-f2zbd\" (UID: \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\") " pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.591132 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574ccc4f-ea1b-4a00-b4bf-63c826025a09-operator-scripts\") pod \"neutron-4926-account-create-qq7gk\" (UID: \"574ccc4f-ea1b-4a00-b4bf-63c826025a09\") " pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.591328 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-operator-scripts\") pod \"neutron-db-create-f2zbd\" (UID: \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\") " pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.591477 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqx45\" (UniqueName: \"kubernetes.io/projected/574ccc4f-ea1b-4a00-b4bf-63c826025a09-kube-api-access-fqx45\") pod \"neutron-4926-account-create-qq7gk\" (UID: 
\"574ccc4f-ea1b-4a00-b4bf-63c826025a09\") " pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.591574 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmkbl\" (UniqueName: \"kubernetes.io/projected/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-kube-api-access-qmkbl\") pod \"neutron-db-create-f2zbd\" (UID: \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\") " pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.611928 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmkbl\" (UniqueName: \"kubernetes.io/projected/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-kube-api-access-qmkbl\") pod \"neutron-db-create-f2zbd\" (UID: \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\") " pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.662619 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.692928 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574ccc4f-ea1b-4a00-b4bf-63c826025a09-operator-scripts\") pod \"neutron-4926-account-create-qq7gk\" (UID: \"574ccc4f-ea1b-4a00-b4bf-63c826025a09\") " pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.693182 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqx45\" (UniqueName: \"kubernetes.io/projected/574ccc4f-ea1b-4a00-b4bf-63c826025a09-kube-api-access-fqx45\") pod \"neutron-4926-account-create-qq7gk\" (UID: \"574ccc4f-ea1b-4a00-b4bf-63c826025a09\") " pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.693646 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574ccc4f-ea1b-4a00-b4bf-63c826025a09-operator-scripts\") pod \"neutron-4926-account-create-qq7gk\" (UID: \"574ccc4f-ea1b-4a00-b4bf-63c826025a09\") " pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.710450 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqx45\" (UniqueName: \"kubernetes.io/projected/574ccc4f-ea1b-4a00-b4bf-63c826025a09-kube-api-access-fqx45\") pod \"neutron-4926-account-create-qq7gk\" (UID: \"574ccc4f-ea1b-4a00-b4bf-63c826025a09\") " pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:22 crc kubenswrapper[4988]: I1123 08:15:22.761938 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:23 crc kubenswrapper[4988]: I1123 08:15:23.115734 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-f2zbd"] Nov 23 08:15:23 crc kubenswrapper[4988]: W1123 08:15:23.327331 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod574ccc4f_ea1b_4a00_b4bf_63c826025a09.slice/crio-68a7de412ada0eb4954a7e8736419f48a231db08b5030d69ab53813b6795903e WatchSource:0}: Error finding container 68a7de412ada0eb4954a7e8736419f48a231db08b5030d69ab53813b6795903e: Status 404 returned error can't find the container with id 68a7de412ada0eb4954a7e8736419f48a231db08b5030d69ab53813b6795903e Nov 23 08:15:23 crc kubenswrapper[4988]: I1123 08:15:23.328712 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4926-account-create-qq7gk"] Nov 23 08:15:23 crc kubenswrapper[4988]: I1123 08:15:23.496156 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:15:23 crc kubenswrapper[4988]: E1123 08:15:23.496652 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:15:24 crc kubenswrapper[4988]: I1123 08:15:24.077151 4988 generic.go:334] "Generic (PLEG): container finished" podID="574ccc4f-ea1b-4a00-b4bf-63c826025a09" containerID="ecce92f3dbdb805f444c97e95c4868f9b34038297615d444909351568a795747" exitCode=0 Nov 23 08:15:24 crc kubenswrapper[4988]: I1123 08:15:24.077251 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4926-account-create-qq7gk" event={"ID":"574ccc4f-ea1b-4a00-b4bf-63c826025a09","Type":"ContainerDied","Data":"ecce92f3dbdb805f444c97e95c4868f9b34038297615d444909351568a795747"} Nov 23 08:15:24 crc kubenswrapper[4988]: I1123 08:15:24.077316 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4926-account-create-qq7gk" event={"ID":"574ccc4f-ea1b-4a00-b4bf-63c826025a09","Type":"ContainerStarted","Data":"68a7de412ada0eb4954a7e8736419f48a231db08b5030d69ab53813b6795903e"} Nov 23 08:15:24 crc kubenswrapper[4988]: I1123 08:15:24.080874 4988 generic.go:334] "Generic (PLEG): container finished" podID="1861e644-6cb5-4be2-a63d-eb80cfb96dc0" containerID="fb1098e99cc61b4f1cf86429d9a772f38207c1b70424bd9990431600c0788227" exitCode=0 Nov 23 08:15:24 crc kubenswrapper[4988]: I1123 08:15:24.080953 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-f2zbd" event={"ID":"1861e644-6cb5-4be2-a63d-eb80cfb96dc0","Type":"ContainerDied","Data":"fb1098e99cc61b4f1cf86429d9a772f38207c1b70424bd9990431600c0788227"} Nov 23 08:15:24 crc kubenswrapper[4988]: I1123 08:15:24.080999 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-f2zbd" event={"ID":"1861e644-6cb5-4be2-a63d-eb80cfb96dc0","Type":"ContainerStarted","Data":"f6e508d5dafea3095a7546f57e01cd8eb8a75e67842ba055a51893222b9d03b5"} Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.492525 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.542252 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.593048 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574ccc4f-ea1b-4a00-b4bf-63c826025a09-operator-scripts\") pod \"574ccc4f-ea1b-4a00-b4bf-63c826025a09\" (UID: \"574ccc4f-ea1b-4a00-b4bf-63c826025a09\") " Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.593133 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqx45\" (UniqueName: \"kubernetes.io/projected/574ccc4f-ea1b-4a00-b4bf-63c826025a09-kube-api-access-fqx45\") pod \"574ccc4f-ea1b-4a00-b4bf-63c826025a09\" (UID: \"574ccc4f-ea1b-4a00-b4bf-63c826025a09\") " Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.593943 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/574ccc4f-ea1b-4a00-b4bf-63c826025a09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "574ccc4f-ea1b-4a00-b4bf-63c826025a09" (UID: "574ccc4f-ea1b-4a00-b4bf-63c826025a09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.600658 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574ccc4f-ea1b-4a00-b4bf-63c826025a09-kube-api-access-fqx45" (OuterVolumeSpecName: "kube-api-access-fqx45") pod "574ccc4f-ea1b-4a00-b4bf-63c826025a09" (UID: "574ccc4f-ea1b-4a00-b4bf-63c826025a09"). InnerVolumeSpecName "kube-api-access-fqx45". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.694356 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-operator-scripts\") pod \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\" (UID: \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\") " Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.694454 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmkbl\" (UniqueName: \"kubernetes.io/projected/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-kube-api-access-qmkbl\") pod \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\" (UID: \"1861e644-6cb5-4be2-a63d-eb80cfb96dc0\") " Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.695438 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/574ccc4f-ea1b-4a00-b4bf-63c826025a09-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.695483 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqx45\" (UniqueName: \"kubernetes.io/projected/574ccc4f-ea1b-4a00-b4bf-63c826025a09-kube-api-access-fqx45\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.697321 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-kube-api-access-qmkbl" (OuterVolumeSpecName: "kube-api-access-qmkbl") pod "1861e644-6cb5-4be2-a63d-eb80cfb96dc0" (UID: "1861e644-6cb5-4be2-a63d-eb80cfb96dc0"). 
InnerVolumeSpecName "kube-api-access-qmkbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.697453 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1861e644-6cb5-4be2-a63d-eb80cfb96dc0" (UID: "1861e644-6cb5-4be2-a63d-eb80cfb96dc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.796981 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmkbl\" (UniqueName: \"kubernetes.io/projected/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-kube-api-access-qmkbl\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:25 crc kubenswrapper[4988]: I1123 08:15:25.797022 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1861e644-6cb5-4be2-a63d-eb80cfb96dc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:26 crc kubenswrapper[4988]: I1123 08:15:26.105378 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4926-account-create-qq7gk" event={"ID":"574ccc4f-ea1b-4a00-b4bf-63c826025a09","Type":"ContainerDied","Data":"68a7de412ada0eb4954a7e8736419f48a231db08b5030d69ab53813b6795903e"} Nov 23 08:15:26 crc kubenswrapper[4988]: I1123 08:15:26.105415 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a7de412ada0eb4954a7e8736419f48a231db08b5030d69ab53813b6795903e" Nov 23 08:15:26 crc kubenswrapper[4988]: I1123 08:15:26.105449 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4926-account-create-qq7gk" Nov 23 08:15:26 crc kubenswrapper[4988]: I1123 08:15:26.107759 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-f2zbd" event={"ID":"1861e644-6cb5-4be2-a63d-eb80cfb96dc0","Type":"ContainerDied","Data":"f6e508d5dafea3095a7546f57e01cd8eb8a75e67842ba055a51893222b9d03b5"} Nov 23 08:15:26 crc kubenswrapper[4988]: I1123 08:15:26.107780 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6e508d5dafea3095a7546f57e01cd8eb8a75e67842ba055a51893222b9d03b5" Nov 23 08:15:26 crc kubenswrapper[4988]: I1123 08:15:26.107857 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-f2zbd" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.618960 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-b99ns"] Nov 23 08:15:27 crc kubenswrapper[4988]: E1123 08:15:27.619360 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1861e644-6cb5-4be2-a63d-eb80cfb96dc0" containerName="mariadb-database-create" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.619375 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1861e644-6cb5-4be2-a63d-eb80cfb96dc0" containerName="mariadb-database-create" Nov 23 08:15:27 crc kubenswrapper[4988]: E1123 08:15:27.619412 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574ccc4f-ea1b-4a00-b4bf-63c826025a09" containerName="mariadb-account-create" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.619421 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="574ccc4f-ea1b-4a00-b4bf-63c826025a09" containerName="mariadb-account-create" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.619633 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1861e644-6cb5-4be2-a63d-eb80cfb96dc0" containerName="mariadb-database-create" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.619955 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="574ccc4f-ea1b-4a00-b4bf-63c826025a09" containerName="mariadb-account-create" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.620588 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.623031 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wjj8f" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.624017 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.624410 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.636635 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b99ns"] Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.641654 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-combined-ca-bundle\") pod \"neutron-db-sync-b99ns\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.641728 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-config\") pod \"neutron-db-sync-b99ns\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.641783 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfxch\" (UniqueName: \"kubernetes.io/projected/dcc2308d-5918-4310-b702-ed5b3d581345-kube-api-access-gfxch\") pod \"neutron-db-sync-b99ns\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.743673 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-combined-ca-bundle\") pod \"neutron-db-sync-b99ns\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.743726 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-config\") pod \"neutron-db-sync-b99ns\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.743771 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfxch\" (UniqueName: \"kubernetes.io/projected/dcc2308d-5918-4310-b702-ed5b3d581345-kube-api-access-gfxch\") pod \"neutron-db-sync-b99ns\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.756083 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-combined-ca-bundle\") pod \"neutron-db-sync-b99ns\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.756221 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-config\") pod \"neutron-db-sync-b99ns\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.785861 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfxch\" (UniqueName: \"kubernetes.io/projected/dcc2308d-5918-4310-b702-ed5b3d581345-kube-api-access-gfxch\") pod \"neutron-db-sync-b99ns\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:27 crc kubenswrapper[4988]: I1123 08:15:27.942092 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:28 crc kubenswrapper[4988]: I1123 08:15:28.166978 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b99ns"] Nov 23 08:15:29 crc kubenswrapper[4988]: I1123 08:15:29.144312 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b99ns" event={"ID":"dcc2308d-5918-4310-b702-ed5b3d581345","Type":"ContainerStarted","Data":"672ea2b919cbb690ee278ac7e814e8d3f244bfbdea3b4c25fedcd9bdd7b959a4"} Nov 23 08:15:29 crc kubenswrapper[4988]: I1123 08:15:29.144692 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b99ns" event={"ID":"dcc2308d-5918-4310-b702-ed5b3d581345","Type":"ContainerStarted","Data":"d3231178b953005bb6379dc899202723c13404b22487cdb39f2ef9b0efddf3a4"} Nov 23 08:15:29 crc kubenswrapper[4988]: I1123 08:15:29.181630 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-b99ns" podStartSLOduration=2.181609267 podStartE2EDuration="2.181609267s" podCreationTimestamp="2025-11-23 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:15:29.173229932 +0000 UTC m=+5381.481742735" watchObservedRunningTime="2025-11-23 08:15:29.181609267 +0000 UTC m=+5381.490122030" Nov 23 08:15:33 crc kubenswrapper[4988]: I1123 08:15:33.192651 4988 generic.go:334] "Generic (PLEG): container finished" podID="dcc2308d-5918-4310-b702-ed5b3d581345" containerID="672ea2b919cbb690ee278ac7e814e8d3f244bfbdea3b4c25fedcd9bdd7b959a4" exitCode=0 Nov 23 08:15:33 crc kubenswrapper[4988]: I1123 08:15:33.192779 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b99ns" event={"ID":"dcc2308d-5918-4310-b702-ed5b3d581345","Type":"ContainerDied","Data":"672ea2b919cbb690ee278ac7e814e8d3f244bfbdea3b4c25fedcd9bdd7b959a4"} Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.572050 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.737771 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-combined-ca-bundle\") pod \"dcc2308d-5918-4310-b702-ed5b3d581345\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.737950 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-config\") pod \"dcc2308d-5918-4310-b702-ed5b3d581345\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.738032 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfxch\" (UniqueName: \"kubernetes.io/projected/dcc2308d-5918-4310-b702-ed5b3d581345-kube-api-access-gfxch\") pod \"dcc2308d-5918-4310-b702-ed5b3d581345\" (UID: \"dcc2308d-5918-4310-b702-ed5b3d581345\") " Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.744640 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc2308d-5918-4310-b702-ed5b3d581345-kube-api-access-gfxch" (OuterVolumeSpecName: "kube-api-access-gfxch") pod "dcc2308d-5918-4310-b702-ed5b3d581345" (UID: "dcc2308d-5918-4310-b702-ed5b3d581345"). InnerVolumeSpecName "kube-api-access-gfxch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.768351 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-config" (OuterVolumeSpecName: "config") pod "dcc2308d-5918-4310-b702-ed5b3d581345" (UID: "dcc2308d-5918-4310-b702-ed5b3d581345"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.782973 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcc2308d-5918-4310-b702-ed5b3d581345" (UID: "dcc2308d-5918-4310-b702-ed5b3d581345"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.839747 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.839770 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfxch\" (UniqueName: \"kubernetes.io/projected/dcc2308d-5918-4310-b702-ed5b3d581345-kube-api-access-gfxch\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:34 crc kubenswrapper[4988]: I1123 08:15:34.839780 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcc2308d-5918-4310-b702-ed5b3d581345-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.217712 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b99ns" event={"ID":"dcc2308d-5918-4310-b702-ed5b3d581345","Type":"ContainerDied","Data":"d3231178b953005bb6379dc899202723c13404b22487cdb39f2ef9b0efddf3a4"} Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.217755 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3231178b953005bb6379dc899202723c13404b22487cdb39f2ef9b0efddf3a4" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.218112 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b99ns" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.496050 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:15:35 crc kubenswrapper[4988]: E1123 08:15:35.496292 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.504023 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cbc6587fc-rjddp"] Nov 23 08:15:35 crc kubenswrapper[4988]: E1123 08:15:35.504612 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc2308d-5918-4310-b702-ed5b3d581345" containerName="neutron-db-sync" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.504633 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc2308d-5918-4310-b702-ed5b3d581345" containerName="neutron-db-sync" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.504826 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc2308d-5918-4310-b702-ed5b3d581345" containerName="neutron-db-sync" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.505943 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.533535 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cbc6587fc-rjddp"] Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.593454 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c5f4f585b-cvg4g"] Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.594734 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.599441 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wjj8f" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.599476 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.599643 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.599706 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.609130 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c5f4f585b-cvg4g"] Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.652921 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-config\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.653160 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqcjg\" (UniqueName: \"kubernetes.io/projected/05041a18-d989-44c5-a04d-9d4836ae59be-kube-api-access-gqcjg\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.653271 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-dns-svc\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.653338 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.653491 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755347 4988 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755387 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-config\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755433 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-config\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755473 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-combined-ca-bundle\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755507 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqcjg\" (UniqueName: \"kubernetes.io/projected/05041a18-d989-44c5-a04d-9d4836ae59be-kube-api-access-gqcjg\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755540 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-dns-svc\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755564 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-httpd-config\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755588 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755603 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gmvx\" (UniqueName: \"kubernetes.io/projected/db2d5e89-7c95-45dd-b271-44a23cb9a97c-kube-api-access-9gmvx\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.755625 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-ovndb-tls-certs\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.757837 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-config\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.757934 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.757932 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.758057 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-dns-svc\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.778040 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqcjg\" (UniqueName: \"kubernetes.io/projected/05041a18-d989-44c5-a04d-9d4836ae59be-kube-api-access-gqcjg\") pod \"dnsmasq-dns-5cbc6587fc-rjddp\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.838624 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.857474 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-config\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.857542 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-combined-ca-bundle\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.857610 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-httpd-config\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.857634 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gmvx\" (UniqueName: \"kubernetes.io/projected/db2d5e89-7c95-45dd-b271-44a23cb9a97c-kube-api-access-9gmvx\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.857662 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-ovndb-tls-certs\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.862279 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-config\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.862812 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-combined-ca-bundle\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.862940 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-httpd-config\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.863073 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-ovndb-tls-certs\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.891862 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9gmvx\" (UniqueName: \"kubernetes.io/projected/db2d5e89-7c95-45dd-b271-44a23cb9a97c-kube-api-access-9gmvx\") pod \"neutron-5c5f4f585b-cvg4g\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:35 crc kubenswrapper[4988]: I1123 08:15:35.913940 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:36 crc kubenswrapper[4988]: I1123 08:15:36.352099 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cbc6587fc-rjddp"] Nov 23 08:15:36 crc kubenswrapper[4988]: I1123 08:15:36.523362 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c5f4f585b-cvg4g"] Nov 23 08:15:36 crc kubenswrapper[4988]: W1123 08:15:36.535778 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb2d5e89_7c95_45dd_b271_44a23cb9a97c.slice/crio-6541ea3ab451c5ba6e12a24fda52b93bfb4b1d1ec4142ccb225ade58cf9a33ae WatchSource:0}: Error finding container 6541ea3ab451c5ba6e12a24fda52b93bfb4b1d1ec4142ccb225ade58cf9a33ae: Status 404 returned error can't find the container with id 6541ea3ab451c5ba6e12a24fda52b93bfb4b1d1ec4142ccb225ade58cf9a33ae Nov 23 08:15:37 crc kubenswrapper[4988]: I1123 08:15:37.254761 4988 generic.go:334] "Generic (PLEG): container finished" podID="05041a18-d989-44c5-a04d-9d4836ae59be" containerID="827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd" exitCode=0 Nov 23 08:15:37 crc kubenswrapper[4988]: I1123 08:15:37.255051 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" event={"ID":"05041a18-d989-44c5-a04d-9d4836ae59be","Type":"ContainerDied","Data":"827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd"} Nov 23 08:15:37 crc kubenswrapper[4988]: I1123 08:15:37.255092 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" event={"ID":"05041a18-d989-44c5-a04d-9d4836ae59be","Type":"ContainerStarted","Data":"6927094c8ef0a711ffbcad81162a71369a2c6d9ecd991feab5bce1403cccbdec"} Nov 23 08:15:37 crc kubenswrapper[4988]: I1123 08:15:37.268788 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5f4f585b-cvg4g" event={"ID":"db2d5e89-7c95-45dd-b271-44a23cb9a97c","Type":"ContainerStarted","Data":"a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af"} Nov 23 08:15:37 crc kubenswrapper[4988]: I1123 08:15:37.268853 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5f4f585b-cvg4g" event={"ID":"db2d5e89-7c95-45dd-b271-44a23cb9a97c","Type":"ContainerStarted","Data":"d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6"} Nov 23 08:15:37 crc kubenswrapper[4988]: I1123 08:15:37.268863 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5f4f585b-cvg4g" event={"ID":"db2d5e89-7c95-45dd-b271-44a23cb9a97c","Type":"ContainerStarted","Data":"6541ea3ab451c5ba6e12a24fda52b93bfb4b1d1ec4142ccb225ade58cf9a33ae"} Nov 23 08:15:37 crc kubenswrapper[4988]: I1123 08:15:37.270056 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:15:37 crc kubenswrapper[4988]: I1123 08:15:37.335415 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c5f4f585b-cvg4g" podStartSLOduration=2.335396231 podStartE2EDuration="2.335396231s" 
podCreationTimestamp="2025-11-23 08:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:15:37.314849577 +0000 UTC m=+5389.623362350" watchObservedRunningTime="2025-11-23 08:15:37.335396231 +0000 UTC m=+5389.643908994" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.278386 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" event={"ID":"05041a18-d989-44c5-a04d-9d4836ae59be","Type":"ContainerStarted","Data":"313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9"} Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.298288 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" podStartSLOduration=3.298270883 podStartE2EDuration="3.298270883s" podCreationTimestamp="2025-11-23 08:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:15:38.295226849 +0000 UTC m=+5390.603739612" watchObservedRunningTime="2025-11-23 08:15:38.298270883 +0000 UTC m=+5390.606783636" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.585957 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-798d8dcd57-qzxxn"] Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.588318 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.590780 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.591018 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.600310 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-798d8dcd57-qzxxn"] Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.706737 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-config\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.706790 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-public-tls-certs\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.706810 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-combined-ca-bundle\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.706835 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-httpd-config\") pod \"neutron-798d8dcd57-qzxxn\" (UID: 
\"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.706878 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-ovndb-tls-certs\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.707037 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2xg\" (UniqueName: \"kubernetes.io/projected/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-kube-api-access-ln2xg\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.707086 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-internal-tls-certs\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.809416 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-config\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.809498 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-public-tls-certs\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.809525 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-combined-ca-bundle\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.809556 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-httpd-config\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.809610 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-ovndb-tls-certs\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.809652 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln2xg\" (UniqueName: \"kubernetes.io/projected/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-kube-api-access-ln2xg\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " 
pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.809679 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-internal-tls-certs\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.816536 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-ovndb-tls-certs\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.816951 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-internal-tls-certs\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.817344 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-combined-ca-bundle\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.818783 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-public-tls-certs\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.819262 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-config\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.824828 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-httpd-config\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.833140 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2xg\" (UniqueName: \"kubernetes.io/projected/733fec61-c6d0-4ab6-b4c6-adfa6f18290d-kube-api-access-ln2xg\") pod \"neutron-798d8dcd57-qzxxn\" (UID: \"733fec61-c6d0-4ab6-b4c6-adfa6f18290d\") " pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:38 crc kubenswrapper[4988]: I1123 08:15:38.903830 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:39 crc kubenswrapper[4988]: I1123 08:15:39.284862 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:39 crc kubenswrapper[4988]: I1123 08:15:39.446023 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-798d8dcd57-qzxxn"] Nov 23 08:15:39 crc kubenswrapper[4988]: W1123 08:15:39.448272 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod733fec61_c6d0_4ab6_b4c6_adfa6f18290d.slice/crio-89ee5007f7a51e4ca8b606908b85542aca1d3ac1024a2ebcf9ea48d56c4a3464 WatchSource:0}: Error finding container 89ee5007f7a51e4ca8b606908b85542aca1d3ac1024a2ebcf9ea48d56c4a3464: Status 404 returned error can't find the container with id 89ee5007f7a51e4ca8b606908b85542aca1d3ac1024a2ebcf9ea48d56c4a3464 Nov 23 08:15:40 crc kubenswrapper[4988]: I1123 08:15:40.293814 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798d8dcd57-qzxxn" event={"ID":"733fec61-c6d0-4ab6-b4c6-adfa6f18290d","Type":"ContainerStarted","Data":"c1d816a156a93d3ceaf7a173972eabe493e99c3f6c561095541a3bd5526876a3"} Nov 23 08:15:40 crc kubenswrapper[4988]: I1123 08:15:40.294068 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798d8dcd57-qzxxn" event={"ID":"733fec61-c6d0-4ab6-b4c6-adfa6f18290d","Type":"ContainerStarted","Data":"a54a2645a33c31078e463b6b443d229cc295d5cd5e92313f36fc214f7c194590"} Nov 23 08:15:40 crc kubenswrapper[4988]: I1123 08:15:40.294086 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-798d8dcd57-qzxxn" event={"ID":"733fec61-c6d0-4ab6-b4c6-adfa6f18290d","Type":"ContainerStarted","Data":"89ee5007f7a51e4ca8b606908b85542aca1d3ac1024a2ebcf9ea48d56c4a3464"} Nov 23 08:15:40 crc kubenswrapper[4988]: I1123 08:15:40.316763 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-798d8dcd57-qzxxn" podStartSLOduration=2.316738219 podStartE2EDuration="2.316738219s" podCreationTimestamp="2025-11-23 08:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:15:40.309621455 +0000 UTC m=+5392.618134218" watchObservedRunningTime="2025-11-23 08:15:40.316738219 +0000 UTC m=+5392.625250982" Nov 23 08:15:41 crc kubenswrapper[4988]: I1123 08:15:41.305955 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:15:45 crc kubenswrapper[4988]: I1123 08:15:45.841464 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:15:45 crc kubenswrapper[4988]: I1123 08:15:45.925881 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd4db9d9-vn8nb"] Nov 23 08:15:45 crc kubenswrapper[4988]: I1123 08:15:45.926516 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" podUID="e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" containerName="dnsmasq-dns" containerID="cri-o://8dd7e323d290e23af89b58b973026eb2d4652523c3fcf75fdd3d2d58fa07241f" gracePeriod=10 Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.354399 4988 generic.go:334] "Generic (PLEG): container finished" podID="e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" 
containerID="8dd7e323d290e23af89b58b973026eb2d4652523c3fcf75fdd3d2d58fa07241f" exitCode=0 Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.354656 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" event={"ID":"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a","Type":"ContainerDied","Data":"8dd7e323d290e23af89b58b973026eb2d4652523c3fcf75fdd3d2d58fa07241f"} Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.461729 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.655908 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-sb\") pod \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.656077 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-dns-svc\") pod \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.656130 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-nb\") pod \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.656163 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-config\") pod \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.656274 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkwkm\" (UniqueName: \"kubernetes.io/projected/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-kube-api-access-rkwkm\") pod \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\" (UID: \"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a\") " Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.662425 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-kube-api-access-rkwkm" (OuterVolumeSpecName: "kube-api-access-rkwkm") pod "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" (UID: "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a"). InnerVolumeSpecName "kube-api-access-rkwkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.703151 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" (UID: "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.705967 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-config" (OuterVolumeSpecName: "config") pod "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" (UID: "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.710118 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" (UID: "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.713588 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" (UID: "e7724d4f-e1aa-4e5c-b6e8-f478eedac62a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.757859 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.757895 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.757905 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.757914 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:46 crc kubenswrapper[4988]: I1123 08:15:46.757923 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkwkm\" (UniqueName: \"kubernetes.io/projected/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a-kube-api-access-rkwkm\") on node \"crc\" DevicePath \"\"" Nov 23 08:15:47 crc kubenswrapper[4988]: I1123 08:15:47.369182 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" event={"ID":"e7724d4f-e1aa-4e5c-b6e8-f478eedac62a","Type":"ContainerDied","Data":"f94cc2eba734621f6a435b70698abae576be6a9611274fa10edf4b3f43db1f96"} Nov 23 08:15:47 crc kubenswrapper[4988]: I1123 08:15:47.369294 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dd4db9d9-vn8nb" Nov 23 08:15:47 crc kubenswrapper[4988]: I1123 08:15:47.369307 4988 scope.go:117] "RemoveContainer" containerID="8dd7e323d290e23af89b58b973026eb2d4652523c3fcf75fdd3d2d58fa07241f" Nov 23 08:15:47 crc kubenswrapper[4988]: I1123 08:15:47.406444 4988 scope.go:117] "RemoveContainer" containerID="b5bbfb8549c2c322b5c1072ae2b16c27cf4d4b6df916943c9f0d2183ac792c99" Nov 23 08:15:47 crc kubenswrapper[4988]: I1123 08:15:47.422429 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd4db9d9-vn8nb"] Nov 23 08:15:47 crc kubenswrapper[4988]: I1123 08:15:47.431385 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dd4db9d9-vn8nb"] Nov 23 08:15:47 crc kubenswrapper[4988]: I1123 08:15:47.496502 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:15:47 crc kubenswrapper[4988]: E1123 08:15:47.497400 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:15:48 crc kubenswrapper[4988]: I1123 08:15:48.514223 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" path="/var/lib/kubelet/pods/e7724d4f-e1aa-4e5c-b6e8-f478eedac62a/volumes" Nov 23 08:15:57 crc kubenswrapper[4988]: I1123 08:15:57.810339 4988 scope.go:117] "RemoveContainer" containerID="dee459d551881e51682810fe037fa610da2348033e95a2aa4b0379c69616100a" Nov 23 08:16:02 crc kubenswrapper[4988]: I1123 08:16:02.496699 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:16:02 crc kubenswrapper[4988]: E1123 08:16:02.497965 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:16:05 crc kubenswrapper[4988]: I1123 08:16:05.927637 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:16:08 crc kubenswrapper[4988]: I1123 08:16:08.925557 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-798d8dcd57-qzxxn" Nov 23 08:16:09 crc kubenswrapper[4988]: I1123 08:16:09.001846 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c5f4f585b-cvg4g"] Nov 23 08:16:09 crc kubenswrapper[4988]: I1123 08:16:09.002098 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c5f4f585b-cvg4g" podUID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerName="neutron-api" containerID="cri-o://d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6" gracePeriod=30 Nov 23 08:16:09 crc kubenswrapper[4988]: I1123 08:16:09.002555 4988 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-5c5f4f585b-cvg4g" podUID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerName="neutron-httpd" containerID="cri-o://a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af" gracePeriod=30 Nov 23 08:16:09 crc kubenswrapper[4988]: I1123 08:16:09.609901 4988 generic.go:334] "Generic (PLEG): container finished" podID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerID="a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af" exitCode=0 Nov 23 08:16:09 crc kubenswrapper[4988]: I1123 08:16:09.610279 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5f4f585b-cvg4g" event={"ID":"db2d5e89-7c95-45dd-b271-44a23cb9a97c","Type":"ContainerDied","Data":"a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af"} Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.160314 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.260762 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-config\") pod \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.261141 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-ovndb-tls-certs\") pod \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.261250 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-combined-ca-bundle\") pod \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.261393 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gmvx\" (UniqueName: \"kubernetes.io/projected/db2d5e89-7c95-45dd-b271-44a23cb9a97c-kube-api-access-9gmvx\") pod \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.261485 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-httpd-config\") pod \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\" (UID: \"db2d5e89-7c95-45dd-b271-44a23cb9a97c\") " Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.266918 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2d5e89-7c95-45dd-b271-44a23cb9a97c-kube-api-access-9gmvx" (OuterVolumeSpecName: "kube-api-access-9gmvx") pod "db2d5e89-7c95-45dd-b271-44a23cb9a97c" (UID: "db2d5e89-7c95-45dd-b271-44a23cb9a97c"). InnerVolumeSpecName "kube-api-access-9gmvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.269949 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "db2d5e89-7c95-45dd-b271-44a23cb9a97c" (UID: "db2d5e89-7c95-45dd-b271-44a23cb9a97c"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.307962 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db2d5e89-7c95-45dd-b271-44a23cb9a97c" (UID: "db2d5e89-7c95-45dd-b271-44a23cb9a97c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.318982 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-config" (OuterVolumeSpecName: "config") pod "db2d5e89-7c95-45dd-b271-44a23cb9a97c" (UID: "db2d5e89-7c95-45dd-b271-44a23cb9a97c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.346140 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "db2d5e89-7c95-45dd-b271-44a23cb9a97c" (UID: "db2d5e89-7c95-45dd-b271-44a23cb9a97c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.363446 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gmvx\" (UniqueName: \"kubernetes.io/projected/db2d5e89-7c95-45dd-b271-44a23cb9a97c-kube-api-access-9gmvx\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.363478 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.363488 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.363497 4988 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.363507 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db2d5e89-7c95-45dd-b271-44a23cb9a97c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.633424 4988 generic.go:334] "Generic (PLEG): container finished" podID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerID="d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6" exitCode=0 Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.633475 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5f4f585b-cvg4g" event={"ID":"db2d5e89-7c95-45dd-b271-44a23cb9a97c","Type":"ContainerDied","Data":"d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6"} Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.633545 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c5f4f585b-cvg4g" 
event={"ID":"db2d5e89-7c95-45dd-b271-44a23cb9a97c","Type":"ContainerDied","Data":"6541ea3ab451c5ba6e12a24fda52b93bfb4b1d1ec4142ccb225ade58cf9a33ae"} Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.633568 4988 scope.go:117] "RemoveContainer" containerID="a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.634082 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c5f4f585b-cvg4g" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.670600 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c5f4f585b-cvg4g"] Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.677953 4988 scope.go:117] "RemoveContainer" containerID="d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.678407 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c5f4f585b-cvg4g"] Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.701619 4988 scope.go:117] "RemoveContainer" containerID="a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af" Nov 23 08:16:11 crc kubenswrapper[4988]: E1123 08:16:11.702171 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af\": container with ID starting with a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af not found: ID does not exist" containerID="a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.702352 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af"} err="failed to get container status \"a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af\": rpc error: code = NotFound desc = could not find container \"a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af\": container with ID starting with a47fe789f50166d3ddca073f51601f1ca13b60ee1a15f607281160bef18db5af not found: ID does not exist" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.702383 4988 scope.go:117] "RemoveContainer" containerID="d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6" Nov 23 08:16:11 crc kubenswrapper[4988]: E1123 08:16:11.702861 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6\": container with ID starting with d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6 not found: ID does not exist" containerID="d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6" Nov 23 08:16:11 crc kubenswrapper[4988]: I1123 08:16:11.702884 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6"} err="failed to get container status \"d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6\": rpc error: code = NotFound desc = could not find container \"d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6\": container with ID starting with d0eb304a96d10168b63d47ba5e041922068fefa75fdcc30ce70fabca933d3af6 not found: ID does not exist" Nov 23 08:16:12 crc kubenswrapper[4988]: I1123 08:16:12.509386 4988 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" path="/var/lib/kubelet/pods/db2d5e89-7c95-45dd-b271-44a23cb9a97c/volumes" Nov 23 08:16:14 crc kubenswrapper[4988]: I1123 08:16:14.496115 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:16:14 crc kubenswrapper[4988]: E1123 08:16:14.496971 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.646760 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6hkxx"] Nov 23 08:16:25 crc kubenswrapper[4988]: E1123 08:16:25.648021 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" containerName="init" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.648047 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" containerName="init" Nov 23 08:16:25 crc kubenswrapper[4988]: E1123 08:16:25.648101 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerName="neutron-api" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.648121 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerName="neutron-api" Nov 23 08:16:25 crc kubenswrapper[4988]: E1123 08:16:25.648161 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerName="neutron-httpd" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.648175 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerName="neutron-httpd" Nov 23 08:16:25 crc kubenswrapper[4988]: E1123 08:16:25.648242 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" containerName="dnsmasq-dns" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.648255 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" containerName="dnsmasq-dns" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.648543 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerName="neutron-httpd" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.648568 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7724d4f-e1aa-4e5c-b6e8-f478eedac62a" containerName="dnsmasq-dns" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.648599 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2d5e89-7c95-45dd-b271-44a23cb9a97c" containerName="neutron-api" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.650805 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.664344 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hkxx"] Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.757478 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-catalog-content\") pod \"community-operators-6hkxx\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.757730 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-utilities\") pod \"community-operators-6hkxx\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.757777 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64f2p\" (UniqueName: \"kubernetes.io/projected/614dd118-bb4e-42eb-a446-3a27872808c8-kube-api-access-64f2p\") pod \"community-operators-6hkxx\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.859163 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-catalog-content\") pod \"community-operators-6hkxx\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.859290 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-utilities\") pod \"community-operators-6hkxx\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.859309 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64f2p\" (UniqueName: \"kubernetes.io/projected/614dd118-bb4e-42eb-a446-3a27872808c8-kube-api-access-64f2p\") pod \"community-operators-6hkxx\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.859782 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-catalog-content\") pod \"community-operators-6hkxx\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.859820 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-utilities\") pod \"community-operators-6hkxx\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.880335 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-64f2p\" (UniqueName: \"kubernetes.io/projected/614dd118-bb4e-42eb-a446-3a27872808c8-kube-api-access-64f2p\") pod \"community-operators-6hkxx\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:25 crc kubenswrapper[4988]: I1123 08:16:25.986453 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:26 crc kubenswrapper[4988]: I1123 08:16:26.522931 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hkxx"] Nov 23 08:16:26 crc kubenswrapper[4988]: I1123 08:16:26.836140 4988 generic.go:334] "Generic (PLEG): container finished" podID="614dd118-bb4e-42eb-a446-3a27872808c8" containerID="6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d" exitCode=0 Nov 23 08:16:26 crc kubenswrapper[4988]: I1123 08:16:26.836220 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkxx" event={"ID":"614dd118-bb4e-42eb-a446-3a27872808c8","Type":"ContainerDied","Data":"6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d"} Nov 23 08:16:26 crc kubenswrapper[4988]: I1123 08:16:26.836247 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkxx" event={"ID":"614dd118-bb4e-42eb-a446-3a27872808c8","Type":"ContainerStarted","Data":"3c2f89cfc6c05b978c344edd780e814e198caaa96ba34c94dd80e944281ecaf1"} Nov 23 08:16:27 crc kubenswrapper[4988]: I1123 08:16:27.496394 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:16:27 crc kubenswrapper[4988]: I1123 08:16:27.847631 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkxx" event={"ID":"614dd118-bb4e-42eb-a446-3a27872808c8","Type":"ContainerStarted","Data":"1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe"} Nov 23 08:16:27 crc kubenswrapper[4988]: I1123 08:16:27.851465 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"44e4e57ae61d4bbe83fcc675e7532f3d05f6f4998ae2b06e2a67a208ef247d5c"} Nov 23 08:16:28 crc kubenswrapper[4988]: I1123 08:16:28.861013 4988 generic.go:334] "Generic (PLEG): container finished" podID="614dd118-bb4e-42eb-a446-3a27872808c8" containerID="1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe" exitCode=0 Nov 23 08:16:28 crc kubenswrapper[4988]: I1123 08:16:28.861102 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkxx" event={"ID":"614dd118-bb4e-42eb-a446-3a27872808c8","Type":"ContainerDied","Data":"1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe"} Nov 23 08:16:29 crc kubenswrapper[4988]: I1123 08:16:29.872336 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkxx" event={"ID":"614dd118-bb4e-42eb-a446-3a27872808c8","Type":"ContainerStarted","Data":"878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09"} Nov 23 08:16:29 crc kubenswrapper[4988]: I1123 08:16:29.893900 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6hkxx" podStartSLOduration=2.4985429630000002 
podStartE2EDuration="4.893879393s" podCreationTimestamp="2025-11-23 08:16:25 +0000 UTC" firstStartedPulling="2025-11-23 08:16:26.841979206 +0000 UTC m=+5439.150491969" lastFinishedPulling="2025-11-23 08:16:29.237315636 +0000 UTC m=+5441.545828399" observedRunningTime="2025-11-23 08:16:29.888491211 +0000 UTC m=+5442.197004014" watchObservedRunningTime="2025-11-23 08:16:29.893879393 +0000 UTC m=+5442.202392166" Nov 23 08:16:35 crc kubenswrapper[4988]: I1123 08:16:35.987368 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:35 crc kubenswrapper[4988]: I1123 08:16:35.988033 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:36 crc kubenswrapper[4988]: I1123 08:16:36.053239 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:37 crc kubenswrapper[4988]: I1123 08:16:37.075381 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:37 crc kubenswrapper[4988]: I1123 08:16:37.133387 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hkxx"] Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.017393 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6hkxx" podUID="614dd118-bb4e-42eb-a446-3a27872808c8" containerName="registry-server" containerID="cri-o://878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09" gracePeriod=2 Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.509028 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.676899 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-catalog-content\") pod \"614dd118-bb4e-42eb-a446-3a27872808c8\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.677095 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-utilities\") pod \"614dd118-bb4e-42eb-a446-3a27872808c8\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.677157 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64f2p\" (UniqueName: \"kubernetes.io/projected/614dd118-bb4e-42eb-a446-3a27872808c8-kube-api-access-64f2p\") pod \"614dd118-bb4e-42eb-a446-3a27872808c8\" (UID: \"614dd118-bb4e-42eb-a446-3a27872808c8\") " Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.678514 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-utilities" (OuterVolumeSpecName: "utilities") pod "614dd118-bb4e-42eb-a446-3a27872808c8" (UID: "614dd118-bb4e-42eb-a446-3a27872808c8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.679392 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.685486 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614dd118-bb4e-42eb-a446-3a27872808c8-kube-api-access-64f2p" (OuterVolumeSpecName: "kube-api-access-64f2p") pod "614dd118-bb4e-42eb-a446-3a27872808c8" (UID: "614dd118-bb4e-42eb-a446-3a27872808c8"). InnerVolumeSpecName "kube-api-access-64f2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.747915 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "614dd118-bb4e-42eb-a446-3a27872808c8" (UID: "614dd118-bb4e-42eb-a446-3a27872808c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.781456 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614dd118-bb4e-42eb-a446-3a27872808c8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:39 crc kubenswrapper[4988]: I1123 08:16:39.781730 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64f2p\" (UniqueName: \"kubernetes.io/projected/614dd118-bb4e-42eb-a446-3a27872808c8-kube-api-access-64f2p\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.026674 4988 generic.go:334] "Generic (PLEG): container finished" podID="614dd118-bb4e-42eb-a446-3a27872808c8" containerID="878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09" exitCode=0 Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.026716 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkxx" event={"ID":"614dd118-bb4e-42eb-a446-3a27872808c8","Type":"ContainerDied","Data":"878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09"} Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.026730 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hkxx" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.026745 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkxx" event={"ID":"614dd118-bb4e-42eb-a446-3a27872808c8","Type":"ContainerDied","Data":"3c2f89cfc6c05b978c344edd780e814e198caaa96ba34c94dd80e944281ecaf1"} Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.026763 4988 scope.go:117] "RemoveContainer" containerID="878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.064211 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hkxx"] Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.077670 4988 scope.go:117] "RemoveContainer" containerID="1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.089921 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6hkxx"] Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.113898 4988 scope.go:117] "RemoveContainer" containerID="6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.140543 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-n5cwm"] Nov 23 08:16:40 crc kubenswrapper[4988]: E1123 08:16:40.141107 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614dd118-bb4e-42eb-a446-3a27872808c8" containerName="extract-content" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.141124 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="614dd118-bb4e-42eb-a446-3a27872808c8" containerName="extract-content" Nov 23 08:16:40 crc kubenswrapper[4988]: E1123 08:16:40.141151 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614dd118-bb4e-42eb-a446-3a27872808c8" containerName="registry-server" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.141158 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="614dd118-bb4e-42eb-a446-3a27872808c8" containerName="registry-server" Nov 23 08:16:40 crc kubenswrapper[4988]: E1123 08:16:40.141211 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614dd118-bb4e-42eb-a446-3a27872808c8" containerName="extract-utilities" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.141219 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="614dd118-bb4e-42eb-a446-3a27872808c8" containerName="extract-utilities" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.141399 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="614dd118-bb4e-42eb-a446-3a27872808c8" containerName="registry-server" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.142002 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.153790 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.154018 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.154189 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-42gsb" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.154413 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.154512 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.160424 4988 scope.go:117] "RemoveContainer" containerID="878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09" Nov 23 08:16:40 crc kubenswrapper[4988]: E1123 08:16:40.184653 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09\": container with ID starting with 878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09 not found: ID does not exist" containerID="878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.184704 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09"} err="failed to get container status \"878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09\": rpc error: code = NotFound desc = could not find container \"878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09\": container with ID starting with 878f3b1f1934810ae46b6ec8b28c80e887bff23716b3a02f586fec98f8fecc09 not found: ID does not exist" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.184736 4988 scope.go:117] "RemoveContainer" containerID="1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe" Nov 23 08:16:40 crc kubenswrapper[4988]: E1123 08:16:40.185338 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe\": container with ID starting with 1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe not found: ID does not exist" containerID="1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.185368 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe"} err="failed to get container status \"1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe\": rpc error: code = NotFound desc = could not find container \"1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe\": container with ID starting with 1e7665371c74db0fa6b7d77a59a2d6b2af8b5ec480232d0312165ccbd13889fe not found: ID does not exist" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.185386 4988 scope.go:117] "RemoveContainer" containerID="6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d" 
Nov 23 08:16:40 crc kubenswrapper[4988]: E1123 08:16:40.187684 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d\": container with ID starting with 6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d not found: ID does not exist" containerID="6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.187715 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d"} err="failed to get container status \"6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d\": rpc error: code = NotFound desc = could not find container \"6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d\": container with ID starting with 6010224eae9bdb3bf659b9f399f660ec37738e26fe40e9e4e8e1966e603d305d not found: ID does not exist" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.199546 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n5cwm"] Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.224777 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5758f7685-d7fl9"] Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.228868 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.246039 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5758f7685-d7fl9"] Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.294077 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-ring-data-devices\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.294175 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjt4z\" (UniqueName: \"kubernetes.io/projected/fff96809-3ec8-4406-a894-c22b4d13b94d-kube-api-access-qjt4z\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.294342 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-scripts\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.294404 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-dispersionconf\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.294454 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/fff96809-3ec8-4406-a894-c22b4d13b94d-etc-swift\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.294618 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-swiftconf\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.294651 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-combined-ca-bundle\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396200 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-scripts\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396242 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-dispersionconf\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396266 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-dns-svc\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396287 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fff96809-3ec8-4406-a894-c22b4d13b94d-etc-swift\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396337 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-swiftconf\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396401 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-combined-ca-bundle\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396509 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-ring-data-devices\") pod 
\"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396544 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-config\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396598 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-nb\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396716 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwqlb\" (UniqueName: \"kubernetes.io/projected/cb7a1444-bb69-441e-a48a-37d458140666-kube-api-access-jwqlb\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396727 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fff96809-3ec8-4406-a894-c22b4d13b94d-etc-swift\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396807 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-sb\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.396831 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjt4z\" (UniqueName: \"kubernetes.io/projected/fff96809-3ec8-4406-a894-c22b4d13b94d-kube-api-access-qjt4z\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.397017 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-scripts\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.397121 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-ring-data-devices\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.404672 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-combined-ca-bundle\") pod \"swift-ring-rebalance-n5cwm\" (UID: 
\"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.405143 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-swiftconf\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.407589 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-dispersionconf\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.412841 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjt4z\" (UniqueName: \"kubernetes.io/projected/fff96809-3ec8-4406-a894-c22b4d13b94d-kube-api-access-qjt4z\") pod \"swift-ring-rebalance-n5cwm\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.497542 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-nb\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.497587 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwqlb\" (UniqueName: \"kubernetes.io/projected/cb7a1444-bb69-441e-a48a-37d458140666-kube-api-access-jwqlb\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.497622 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-sb\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.497658 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-dns-svc\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.497742 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-config\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.498590 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-sb\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: 
I1123 08:16:40.498612 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-config\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.498737 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-dns-svc\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.499082 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-nb\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.500477 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.505796 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614dd118-bb4e-42eb-a446-3a27872808c8" path="/var/lib/kubelet/pods/614dd118-bb4e-42eb-a446-3a27872808c8/volumes" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.514269 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwqlb\" (UniqueName: \"kubernetes.io/projected/cb7a1444-bb69-441e-a48a-37d458140666-kube-api-access-jwqlb\") pod \"dnsmasq-dns-5758f7685-d7fl9\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:40 crc kubenswrapper[4988]: I1123 08:16:40.547137 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:41 crc kubenswrapper[4988]: I1123 08:16:41.007151 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n5cwm"] Nov 23 08:16:41 crc kubenswrapper[4988]: W1123 08:16:41.013562 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfff96809_3ec8_4406_a894_c22b4d13b94d.slice/crio-0f8c04f232ddfa5c5a898b8662de95e483ad3068f9c646a015f2138fcf83dc51 WatchSource:0}: Error finding container 0f8c04f232ddfa5c5a898b8662de95e483ad3068f9c646a015f2138fcf83dc51: Status 404 returned error can't find the container with id 0f8c04f232ddfa5c5a898b8662de95e483ad3068f9c646a015f2138fcf83dc51 Nov 23 08:16:41 crc kubenswrapper[4988]: I1123 08:16:41.036513 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n5cwm" event={"ID":"fff96809-3ec8-4406-a894-c22b4d13b94d","Type":"ContainerStarted","Data":"0f8c04f232ddfa5c5a898b8662de95e483ad3068f9c646a015f2138fcf83dc51"} Nov 23 08:16:41 crc kubenswrapper[4988]: W1123 08:16:41.109915 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb7a1444_bb69_441e_a48a_37d458140666.slice/crio-9349fa918625e60eb1a5b7765f06125a362b6b820d4d072cb0f71e1a6ab9ec20 WatchSource:0}: Error finding container 9349fa918625e60eb1a5b7765f06125a362b6b820d4d072cb0f71e1a6ab9ec20: Status 404 returned error can't find the container with id 9349fa918625e60eb1a5b7765f06125a362b6b820d4d072cb0f71e1a6ab9ec20 Nov 23 08:16:41 crc kubenswrapper[4988]: I1123 08:16:41.110149 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5758f7685-d7fl9"] Nov 23 08:16:42 crc kubenswrapper[4988]: I1123 08:16:42.053012 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb7a1444-bb69-441e-a48a-37d458140666" containerID="6aa743aad47741f4ca839d4ce6c4409df16feb5b7b8732e1d3076d1f41fb7a50" exitCode=0 Nov 23 08:16:42 crc kubenswrapper[4988]: I1123 08:16:42.053384 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" event={"ID":"cb7a1444-bb69-441e-a48a-37d458140666","Type":"ContainerDied","Data":"6aa743aad47741f4ca839d4ce6c4409df16feb5b7b8732e1d3076d1f41fb7a50"} Nov 23 08:16:42 crc kubenswrapper[4988]: I1123 08:16:42.053414 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" event={"ID":"cb7a1444-bb69-441e-a48a-37d458140666","Type":"ContainerStarted","Data":"9349fa918625e60eb1a5b7765f06125a362b6b820d4d072cb0f71e1a6ab9ec20"} Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.066852 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" event={"ID":"cb7a1444-bb69-441e-a48a-37d458140666","Type":"ContainerStarted","Data":"86416b02d257d4661a16353032837967e9aa347f7ca1468b5de097232ad35e0f"} Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.067370 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.088121 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" podStartSLOduration=3.088105997 podStartE2EDuration="3.088105997s" podCreationTimestamp="2025-11-23 08:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-23 08:16:43.085441282 +0000 UTC m=+5455.393954075" watchObservedRunningTime="2025-11-23 08:16:43.088105997 +0000 UTC m=+5455.396618760" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.688815 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6b7c8d774d-kkpgx"] Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.690667 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.693785 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.693828 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.693917 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.701933 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b7c8d774d-kkpgx"] Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.861474 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-config-data\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.861748 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7eaab950-47dc-48a7-8e3c-854cec2fab5f-etc-swift\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.861779 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eaab950-47dc-48a7-8e3c-854cec2fab5f-log-httpd\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.861795 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-public-tls-certs\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.861811 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-combined-ca-bundle\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.861858 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99krm\" (UniqueName: \"kubernetes.io/projected/7eaab950-47dc-48a7-8e3c-854cec2fab5f-kube-api-access-99krm\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: 
\"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.861881 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-internal-tls-certs\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.861903 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eaab950-47dc-48a7-8e3c-854cec2fab5f-run-httpd\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.962975 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-internal-tls-certs\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.963040 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eaab950-47dc-48a7-8e3c-854cec2fab5f-run-httpd\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.963156 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-config-data\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.963182 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7eaab950-47dc-48a7-8e3c-854cec2fab5f-etc-swift\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.963237 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eaab950-47dc-48a7-8e3c-854cec2fab5f-log-httpd\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.963259 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-public-tls-certs\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.963282 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-combined-ca-bundle\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " 
pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.963350 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99krm\" (UniqueName: \"kubernetes.io/projected/7eaab950-47dc-48a7-8e3c-854cec2fab5f-kube-api-access-99krm\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.964870 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eaab950-47dc-48a7-8e3c-854cec2fab5f-run-httpd\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.965703 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eaab950-47dc-48a7-8e3c-854cec2fab5f-log-httpd\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.969025 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-internal-tls-certs\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.970239 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-config-data\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.975900 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-public-tls-certs\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.976272 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7eaab950-47dc-48a7-8e3c-854cec2fab5f-etc-swift\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.977119 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eaab950-47dc-48a7-8e3c-854cec2fab5f-combined-ca-bundle\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:43 crc kubenswrapper[4988]: I1123 08:16:43.978764 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99krm\" (UniqueName: \"kubernetes.io/projected/7eaab950-47dc-48a7-8e3c-854cec2fab5f-kube-api-access-99krm\") pod \"swift-proxy-6b7c8d774d-kkpgx\" (UID: \"7eaab950-47dc-48a7-8e3c-854cec2fab5f\") " pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:44 crc kubenswrapper[4988]: I1123 08:16:44.030323 4988 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:45 crc kubenswrapper[4988]: I1123 08:16:45.084268 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n5cwm" event={"ID":"fff96809-3ec8-4406-a894-c22b4d13b94d","Type":"ContainerStarted","Data":"dcd4f55021040fde7bed02042cd8fa518da8c6bf61fff5e2f5e8005651e69c43"} Nov 23 08:16:45 crc kubenswrapper[4988]: I1123 08:16:45.105582 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-n5cwm" podStartSLOduration=1.566080733 podStartE2EDuration="5.105568228s" podCreationTimestamp="2025-11-23 08:16:40 +0000 UTC" firstStartedPulling="2025-11-23 08:16:41.016298603 +0000 UTC m=+5453.324811366" lastFinishedPulling="2025-11-23 08:16:44.555786098 +0000 UTC m=+5456.864298861" observedRunningTime="2025-11-23 08:16:45.102833611 +0000 UTC m=+5457.411346374" watchObservedRunningTime="2025-11-23 08:16:45.105568228 +0000 UTC m=+5457.414080991" Nov 23 08:16:45 crc kubenswrapper[4988]: I1123 08:16:45.136061 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6b7c8d774d-kkpgx"] Nov 23 08:16:45 crc kubenswrapper[4988]: W1123 08:16:45.144448 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eaab950_47dc_48a7_8e3c_854cec2fab5f.slice/crio-714c630ca42a2592d582540366ef7bdecc8feeeb0de8c07dbe5922965f437ae7 WatchSource:0}: Error finding container 714c630ca42a2592d582540366ef7bdecc8feeeb0de8c07dbe5922965f437ae7: Status 404 returned error can't find the container with id 714c630ca42a2592d582540366ef7bdecc8feeeb0de8c07dbe5922965f437ae7 Nov 23 08:16:46 crc kubenswrapper[4988]: I1123 08:16:46.110430 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" event={"ID":"7eaab950-47dc-48a7-8e3c-854cec2fab5f","Type":"ContainerStarted","Data":"9a869d6dd4cbb9e92399f3a72c94af6604fd4fbd58c060ef41bbfd84e655718f"} Nov 23 08:16:46 crc kubenswrapper[4988]: I1123 08:16:46.111021 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:46 crc kubenswrapper[4988]: I1123 08:16:46.111032 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" event={"ID":"7eaab950-47dc-48a7-8e3c-854cec2fab5f","Type":"ContainerStarted","Data":"42fdac784e9e934decfc2955ff00864a9404da11d737c9e79f3b7029869863c3"} Nov 23 08:16:46 crc kubenswrapper[4988]: I1123 08:16:46.111042 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:46 crc kubenswrapper[4988]: I1123 08:16:46.111050 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" event={"ID":"7eaab950-47dc-48a7-8e3c-854cec2fab5f","Type":"ContainerStarted","Data":"714c630ca42a2592d582540366ef7bdecc8feeeb0de8c07dbe5922965f437ae7"} Nov 23 08:16:46 crc kubenswrapper[4988]: I1123 08:16:46.142179 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" podStartSLOduration=3.142164307 podStartE2EDuration="3.142164307s" podCreationTimestamp="2025-11-23 08:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:16:46.138488187 +0000 UTC m=+5458.447000940" 
watchObservedRunningTime="2025-11-23 08:16:46.142164307 +0000 UTC m=+5458.450677070" Nov 23 08:16:49 crc kubenswrapper[4988]: I1123 08:16:49.138240 4988 generic.go:334] "Generic (PLEG): container finished" podID="fff96809-3ec8-4406-a894-c22b4d13b94d" containerID="dcd4f55021040fde7bed02042cd8fa518da8c6bf61fff5e2f5e8005651e69c43" exitCode=0 Nov 23 08:16:49 crc kubenswrapper[4988]: I1123 08:16:49.138363 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n5cwm" event={"ID":"fff96809-3ec8-4406-a894-c22b4d13b94d","Type":"ContainerDied","Data":"dcd4f55021040fde7bed02042cd8fa518da8c6bf61fff5e2f5e8005651e69c43"} Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.494661 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.549288 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.592747 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-swiftconf\") pod \"fff96809-3ec8-4406-a894-c22b4d13b94d\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.592831 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fff96809-3ec8-4406-a894-c22b4d13b94d-etc-swift\") pod \"fff96809-3ec8-4406-a894-c22b4d13b94d\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.592856 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-scripts\") pod \"fff96809-3ec8-4406-a894-c22b4d13b94d\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.592887 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-ring-data-devices\") pod \"fff96809-3ec8-4406-a894-c22b4d13b94d\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.592971 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjt4z\" (UniqueName: \"kubernetes.io/projected/fff96809-3ec8-4406-a894-c22b4d13b94d-kube-api-access-qjt4z\") pod \"fff96809-3ec8-4406-a894-c22b4d13b94d\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.592992 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-combined-ca-bundle\") pod \"fff96809-3ec8-4406-a894-c22b4d13b94d\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.593042 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-dispersionconf\") pod \"fff96809-3ec8-4406-a894-c22b4d13b94d\" (UID: \"fff96809-3ec8-4406-a894-c22b4d13b94d\") " Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.600992 4988 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fff96809-3ec8-4406-a894-c22b4d13b94d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fff96809-3ec8-4406-a894-c22b4d13b94d" (UID: "fff96809-3ec8-4406-a894-c22b4d13b94d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.602515 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cbc6587fc-rjddp"] Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.602736 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" podUID="05041a18-d989-44c5-a04d-9d4836ae59be" containerName="dnsmasq-dns" containerID="cri-o://313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9" gracePeriod=10 Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.604157 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "fff96809-3ec8-4406-a894-c22b4d13b94d" (UID: "fff96809-3ec8-4406-a894-c22b4d13b94d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.610492 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff96809-3ec8-4406-a894-c22b4d13b94d-kube-api-access-qjt4z" (OuterVolumeSpecName: "kube-api-access-qjt4z") pod "fff96809-3ec8-4406-a894-c22b4d13b94d" (UID: "fff96809-3ec8-4406-a894-c22b4d13b94d"). InnerVolumeSpecName "kube-api-access-qjt4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.642603 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "fff96809-3ec8-4406-a894-c22b4d13b94d" (UID: "fff96809-3ec8-4406-a894-c22b4d13b94d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.644251 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fff96809-3ec8-4406-a894-c22b4d13b94d" (UID: "fff96809-3ec8-4406-a894-c22b4d13b94d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.654365 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "fff96809-3ec8-4406-a894-c22b4d13b94d" (UID: "fff96809-3ec8-4406-a894-c22b4d13b94d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.671698 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-scripts" (OuterVolumeSpecName: "scripts") pod "fff96809-3ec8-4406-a894-c22b4d13b94d" (UID: "fff96809-3ec8-4406-a894-c22b4d13b94d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.695148 4988 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.695184 4988 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fff96809-3ec8-4406-a894-c22b4d13b94d-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.695205 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.695216 4988 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fff96809-3ec8-4406-a894-c22b4d13b94d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.695227 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjt4z\" (UniqueName: \"kubernetes.io/projected/fff96809-3ec8-4406-a894-c22b4d13b94d-kube-api-access-qjt4z\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.695235 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:50 crc kubenswrapper[4988]: I1123 08:16:50.695243 4988 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fff96809-3ec8-4406-a894-c22b4d13b94d-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.003778 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.102100 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqcjg\" (UniqueName: \"kubernetes.io/projected/05041a18-d989-44c5-a04d-9d4836ae59be-kube-api-access-gqcjg\") pod \"05041a18-d989-44c5-a04d-9d4836ae59be\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.102208 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-nb\") pod \"05041a18-d989-44c5-a04d-9d4836ae59be\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.102242 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-dns-svc\") pod \"05041a18-d989-44c5-a04d-9d4836ae59be\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.102263 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-config\") pod \"05041a18-d989-44c5-a04d-9d4836ae59be\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.102295 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-sb\") pod \"05041a18-d989-44c5-a04d-9d4836ae59be\" (UID: \"05041a18-d989-44c5-a04d-9d4836ae59be\") " Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.113480 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05041a18-d989-44c5-a04d-9d4836ae59be-kube-api-access-gqcjg" (OuterVolumeSpecName: "kube-api-access-gqcjg") pod "05041a18-d989-44c5-a04d-9d4836ae59be" (UID: "05041a18-d989-44c5-a04d-9d4836ae59be"). InnerVolumeSpecName "kube-api-access-gqcjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.156817 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05041a18-d989-44c5-a04d-9d4836ae59be" (UID: "05041a18-d989-44c5-a04d-9d4836ae59be"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.159677 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05041a18-d989-44c5-a04d-9d4836ae59be" (UID: "05041a18-d989-44c5-a04d-9d4836ae59be"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.160928 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-config" (OuterVolumeSpecName: "config") pod "05041a18-d989-44c5-a04d-9d4836ae59be" (UID: "05041a18-d989-44c5-a04d-9d4836ae59be"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.161433 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05041a18-d989-44c5-a04d-9d4836ae59be" (UID: "05041a18-d989-44c5-a04d-9d4836ae59be"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.163542 4988 generic.go:334] "Generic (PLEG): container finished" podID="05041a18-d989-44c5-a04d-9d4836ae59be" containerID="313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9" exitCode=0 Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.163631 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" event={"ID":"05041a18-d989-44c5-a04d-9d4836ae59be","Type":"ContainerDied","Data":"313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9"} Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.163666 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" event={"ID":"05041a18-d989-44c5-a04d-9d4836ae59be","Type":"ContainerDied","Data":"6927094c8ef0a711ffbcad81162a71369a2c6d9ecd991feab5bce1403cccbdec"} Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.163649 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.163684 4988 scope.go:117] "RemoveContainer" containerID="313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.167632 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n5cwm" event={"ID":"fff96809-3ec8-4406-a894-c22b4d13b94d","Type":"ContainerDied","Data":"0f8c04f232ddfa5c5a898b8662de95e483ad3068f9c646a015f2138fcf83dc51"} Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.167669 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f8c04f232ddfa5c5a898b8662de95e483ad3068f9c646a015f2138fcf83dc51" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.167748 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n5cwm" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.200447 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cbc6587fc-rjddp"] Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.200651 4988 scope.go:117] "RemoveContainer" containerID="827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.204035 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.204055 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.204065 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.204075 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqcjg\" (UniqueName: \"kubernetes.io/projected/05041a18-d989-44c5-a04d-9d4836ae59be-kube-api-access-gqcjg\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.204084 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05041a18-d989-44c5-a04d-9d4836ae59be-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.212487 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cbc6587fc-rjddp"] Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.235790 4988 scope.go:117] "RemoveContainer" containerID="313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9" Nov 23 08:16:51 crc kubenswrapper[4988]: E1123 08:16:51.236144 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9\": container with ID starting with 313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9 not found: ID does not exist" containerID="313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.236242 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9"} err="failed to get container status \"313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9\": rpc error: code = NotFound desc = could not find container \"313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9\": container with ID starting with 313d74abd3fc8eb08bd0ea12d0e4597ef87aa403fe13bc023e279c43de2fead9 not found: ID does not exist" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.236283 4988 scope.go:117] "RemoveContainer" containerID="827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd" Nov 23 08:16:51 crc kubenswrapper[4988]: E1123 08:16:51.236816 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd\": container with ID starting with 827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd not found: ID does not exist" containerID="827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd" Nov 23 08:16:51 crc kubenswrapper[4988]: I1123 08:16:51.236849 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd"} err="failed to get container status \"827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd\": rpc error: code = NotFound desc = could not find container \"827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd\": container with ID starting with 827f9912091cc7ee7bc3428cbecdde274ee51d5f8042c72154336c56681093fd not found: ID does not exist" Nov 23 08:16:52 crc kubenswrapper[4988]: I1123 08:16:52.508923 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05041a18-d989-44c5-a04d-9d4836ae59be" path="/var/lib/kubelet/pods/05041a18-d989-44c5-a04d-9d4836ae59be/volumes" Nov 23 08:16:54 crc kubenswrapper[4988]: I1123 08:16:54.037610 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:54 crc kubenswrapper[4988]: I1123 08:16:54.041014 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6b7c8d774d-kkpgx" Nov 23 08:16:55 crc kubenswrapper[4988]: I1123 08:16:55.846940 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5cbc6587fc-rjddp" podUID="05041a18-d989-44c5-a04d-9d4836ae59be" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.37:5353: i/o timeout" Nov 23 08:16:57 crc kubenswrapper[4988]: I1123 08:16:57.906902 4988 scope.go:117] "RemoveContainer" containerID="77d262c74af6f5d8d3713208142e3792fc556ea9bf008c9484f153ac8278fb1c" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.161325 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6plwr"] Nov 23 08:17:00 crc kubenswrapper[4988]: E1123 08:17:00.161943 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05041a18-d989-44c5-a04d-9d4836ae59be" containerName="init" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.161956 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="05041a18-d989-44c5-a04d-9d4836ae59be" containerName="init" Nov 23 08:17:00 crc kubenswrapper[4988]: E1123 08:17:00.161967 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff96809-3ec8-4406-a894-c22b4d13b94d" containerName="swift-ring-rebalance" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.161973 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff96809-3ec8-4406-a894-c22b4d13b94d" containerName="swift-ring-rebalance" Nov 23 08:17:00 crc kubenswrapper[4988]: E1123 08:17:00.161981 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05041a18-d989-44c5-a04d-9d4836ae59be" containerName="dnsmasq-dns" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.161987 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="05041a18-d989-44c5-a04d-9d4836ae59be" containerName="dnsmasq-dns" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.162138 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff96809-3ec8-4406-a894-c22b4d13b94d" containerName="swift-ring-rebalance" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 
08:17:00.162151 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="05041a18-d989-44c5-a04d-9d4836ae59be" containerName="dnsmasq-dns" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.162759 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.173974 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6plwr"] Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.267922 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db01-account-create-vchh9"] Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.269603 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.271605 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.274755 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db01-account-create-vchh9"] Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.307286 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/443a70aa-cf7b-4a3a-a970-b3de700daf85-operator-scripts\") pod \"cinder-db-create-6plwr\" (UID: \"443a70aa-cf7b-4a3a-a970-b3de700daf85\") " pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.307528 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn25p\" (UniqueName: \"kubernetes.io/projected/443a70aa-cf7b-4a3a-a970-b3de700daf85-kube-api-access-zn25p\") pod \"cinder-db-create-6plwr\" (UID: \"443a70aa-cf7b-4a3a-a970-b3de700daf85\") " pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.410085 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/443a70aa-cf7b-4a3a-a970-b3de700daf85-operator-scripts\") pod \"cinder-db-create-6plwr\" (UID: \"443a70aa-cf7b-4a3a-a970-b3de700daf85\") " pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.410167 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c814f2-f0b8-497b-b2bf-8b699805f073-operator-scripts\") pod \"cinder-db01-account-create-vchh9\" (UID: \"f4c814f2-f0b8-497b-b2bf-8b699805f073\") " pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.410273 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhpkf\" (UniqueName: \"kubernetes.io/projected/f4c814f2-f0b8-497b-b2bf-8b699805f073-kube-api-access-qhpkf\") pod \"cinder-db01-account-create-vchh9\" (UID: \"f4c814f2-f0b8-497b-b2bf-8b699805f073\") " pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.410484 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn25p\" (UniqueName: \"kubernetes.io/projected/443a70aa-cf7b-4a3a-a970-b3de700daf85-kube-api-access-zn25p\") pod \"cinder-db-create-6plwr\" (UID: 
\"443a70aa-cf7b-4a3a-a970-b3de700daf85\") " pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.411050 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/443a70aa-cf7b-4a3a-a970-b3de700daf85-operator-scripts\") pod \"cinder-db-create-6plwr\" (UID: \"443a70aa-cf7b-4a3a-a970-b3de700daf85\") " pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.432383 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn25p\" (UniqueName: \"kubernetes.io/projected/443a70aa-cf7b-4a3a-a970-b3de700daf85-kube-api-access-zn25p\") pod \"cinder-db-create-6plwr\" (UID: \"443a70aa-cf7b-4a3a-a970-b3de700daf85\") " pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.480173 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.514528 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c814f2-f0b8-497b-b2bf-8b699805f073-operator-scripts\") pod \"cinder-db01-account-create-vchh9\" (UID: \"f4c814f2-f0b8-497b-b2bf-8b699805f073\") " pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.514606 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhpkf\" (UniqueName: \"kubernetes.io/projected/f4c814f2-f0b8-497b-b2bf-8b699805f073-kube-api-access-qhpkf\") pod \"cinder-db01-account-create-vchh9\" (UID: \"f4c814f2-f0b8-497b-b2bf-8b699805f073\") " pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.515360 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c814f2-f0b8-497b-b2bf-8b699805f073-operator-scripts\") pod \"cinder-db01-account-create-vchh9\" (UID: \"f4c814f2-f0b8-497b-b2bf-8b699805f073\") " pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.545601 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhpkf\" (UniqueName: \"kubernetes.io/projected/f4c814f2-f0b8-497b-b2bf-8b699805f073-kube-api-access-qhpkf\") pod \"cinder-db01-account-create-vchh9\" (UID: \"f4c814f2-f0b8-497b-b2bf-8b699805f073\") " pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.590036 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:00 crc kubenswrapper[4988]: I1123 08:17:00.919814 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6plwr"] Nov 23 08:17:00 crc kubenswrapper[4988]: W1123 08:17:00.928360 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod443a70aa_cf7b_4a3a_a970_b3de700daf85.slice/crio-930a9127a262d80f0df0ee5d8e5fc42a5a6bacd720595a77d8deeb23e9c9a7cb WatchSource:0}: Error finding container 930a9127a262d80f0df0ee5d8e5fc42a5a6bacd720595a77d8deeb23e9c9a7cb: Status 404 returned error can't find the container with id 930a9127a262d80f0df0ee5d8e5fc42a5a6bacd720595a77d8deeb23e9c9a7cb Nov 23 08:17:01 crc kubenswrapper[4988]: I1123 08:17:01.130287 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db01-account-create-vchh9"] Nov 23 08:17:01 crc kubenswrapper[4988]: I1123 08:17:01.270849 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6plwr" event={"ID":"443a70aa-cf7b-4a3a-a970-b3de700daf85","Type":"ContainerStarted","Data":"56f27d4e11d78b4ccb0f647ca76cb167f7a0ad11b1384e7efe3cd00325cfd6e1"} Nov 23 08:17:01 crc kubenswrapper[4988]: I1123 08:17:01.270889 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6plwr" event={"ID":"443a70aa-cf7b-4a3a-a970-b3de700daf85","Type":"ContainerStarted","Data":"930a9127a262d80f0df0ee5d8e5fc42a5a6bacd720595a77d8deeb23e9c9a7cb"} Nov 23 08:17:01 crc kubenswrapper[4988]: I1123 08:17:01.273378 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db01-account-create-vchh9" event={"ID":"f4c814f2-f0b8-497b-b2bf-8b699805f073","Type":"ContainerStarted","Data":"4ef6b75b937f89e74f0680ab7c8e527b4d4f73600c3f8cc1feb9a09555798ded"} Nov 23 08:17:01 crc kubenswrapper[4988]: I1123 08:17:01.286069 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-6plwr" podStartSLOduration=1.286053721 podStartE2EDuration="1.286053721s" podCreationTimestamp="2025-11-23 08:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:17:01.283697593 +0000 UTC m=+5473.592210356" watchObservedRunningTime="2025-11-23 08:17:01.286053721 +0000 UTC m=+5473.594566484" Nov 23 08:17:02 crc kubenswrapper[4988]: I1123 08:17:02.288361 4988 generic.go:334] "Generic (PLEG): container finished" podID="443a70aa-cf7b-4a3a-a970-b3de700daf85" containerID="56f27d4e11d78b4ccb0f647ca76cb167f7a0ad11b1384e7efe3cd00325cfd6e1" exitCode=0 Nov 23 08:17:02 crc kubenswrapper[4988]: I1123 08:17:02.288454 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6plwr" event={"ID":"443a70aa-cf7b-4a3a-a970-b3de700daf85","Type":"ContainerDied","Data":"56f27d4e11d78b4ccb0f647ca76cb167f7a0ad11b1384e7efe3cd00325cfd6e1"} Nov 23 08:17:02 crc kubenswrapper[4988]: I1123 08:17:02.295340 4988 generic.go:334] "Generic (PLEG): container finished" podID="f4c814f2-f0b8-497b-b2bf-8b699805f073" containerID="51bdd03fbeec896dea5693a259e5bcd80ea5e3e268475c0faf1c57add910ee6c" exitCode=0 Nov 23 08:17:02 crc kubenswrapper[4988]: I1123 08:17:02.295410 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db01-account-create-vchh9" 
event={"ID":"f4c814f2-f0b8-497b-b2bf-8b699805f073","Type":"ContainerDied","Data":"51bdd03fbeec896dea5693a259e5bcd80ea5e3e268475c0faf1c57add910ee6c"} Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.845844 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.852377 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.986923 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn25p\" (UniqueName: \"kubernetes.io/projected/443a70aa-cf7b-4a3a-a970-b3de700daf85-kube-api-access-zn25p\") pod \"443a70aa-cf7b-4a3a-a970-b3de700daf85\" (UID: \"443a70aa-cf7b-4a3a-a970-b3de700daf85\") " Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.987026 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c814f2-f0b8-497b-b2bf-8b699805f073-operator-scripts\") pod \"f4c814f2-f0b8-497b-b2bf-8b699805f073\" (UID: \"f4c814f2-f0b8-497b-b2bf-8b699805f073\") " Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.987065 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhpkf\" (UniqueName: \"kubernetes.io/projected/f4c814f2-f0b8-497b-b2bf-8b699805f073-kube-api-access-qhpkf\") pod \"f4c814f2-f0b8-497b-b2bf-8b699805f073\" (UID: \"f4c814f2-f0b8-497b-b2bf-8b699805f073\") " Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.987123 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/443a70aa-cf7b-4a3a-a970-b3de700daf85-operator-scripts\") pod \"443a70aa-cf7b-4a3a-a970-b3de700daf85\" (UID: \"443a70aa-cf7b-4a3a-a970-b3de700daf85\") " Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.987956 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4c814f2-f0b8-497b-b2bf-8b699805f073-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4c814f2-f0b8-497b-b2bf-8b699805f073" (UID: "f4c814f2-f0b8-497b-b2bf-8b699805f073"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.988324 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/443a70aa-cf7b-4a3a-a970-b3de700daf85-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "443a70aa-cf7b-4a3a-a970-b3de700daf85" (UID: "443a70aa-cf7b-4a3a-a970-b3de700daf85"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.995081 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/443a70aa-cf7b-4a3a-a970-b3de700daf85-kube-api-access-zn25p" (OuterVolumeSpecName: "kube-api-access-zn25p") pod "443a70aa-cf7b-4a3a-a970-b3de700daf85" (UID: "443a70aa-cf7b-4a3a-a970-b3de700daf85"). InnerVolumeSpecName "kube-api-access-zn25p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:17:03 crc kubenswrapper[4988]: I1123 08:17:03.998539 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4c814f2-f0b8-497b-b2bf-8b699805f073-kube-api-access-qhpkf" (OuterVolumeSpecName: "kube-api-access-qhpkf") pod "f4c814f2-f0b8-497b-b2bf-8b699805f073" (UID: "f4c814f2-f0b8-497b-b2bf-8b699805f073"). InnerVolumeSpecName "kube-api-access-qhpkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.089229 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn25p\" (UniqueName: \"kubernetes.io/projected/443a70aa-cf7b-4a3a-a970-b3de700daf85-kube-api-access-zn25p\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.089257 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c814f2-f0b8-497b-b2bf-8b699805f073-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.089266 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhpkf\" (UniqueName: \"kubernetes.io/projected/f4c814f2-f0b8-497b-b2bf-8b699805f073-kube-api-access-qhpkf\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.089275 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/443a70aa-cf7b-4a3a-a970-b3de700daf85-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.331480 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db01-account-create-vchh9" Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.331492 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db01-account-create-vchh9" event={"ID":"f4c814f2-f0b8-497b-b2bf-8b699805f073","Type":"ContainerDied","Data":"4ef6b75b937f89e74f0680ab7c8e527b4d4f73600c3f8cc1feb9a09555798ded"} Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.331551 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ef6b75b937f89e74f0680ab7c8e527b4d4f73600c3f8cc1feb9a09555798ded" Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.334655 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6plwr" event={"ID":"443a70aa-cf7b-4a3a-a970-b3de700daf85","Type":"ContainerDied","Data":"930a9127a262d80f0df0ee5d8e5fc42a5a6bacd720595a77d8deeb23e9c9a7cb"} Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.334795 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930a9127a262d80f0df0ee5d8e5fc42a5a6bacd720595a77d8deeb23e9c9a7cb" Nov 23 08:17:04 crc kubenswrapper[4988]: I1123 08:17:04.335012 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6plwr" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.403273 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-rrjhf"] Nov 23 08:17:05 crc kubenswrapper[4988]: E1123 08:17:05.404004 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c814f2-f0b8-497b-b2bf-8b699805f073" containerName="mariadb-account-create" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.404019 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c814f2-f0b8-497b-b2bf-8b699805f073" containerName="mariadb-account-create" Nov 23 08:17:05 crc kubenswrapper[4988]: E1123 08:17:05.404053 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="443a70aa-cf7b-4a3a-a970-b3de700daf85" containerName="mariadb-database-create" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.404060 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="443a70aa-cf7b-4a3a-a970-b3de700daf85" containerName="mariadb-database-create" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.404255 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="443a70aa-cf7b-4a3a-a970-b3de700daf85" containerName="mariadb-database-create" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.404270 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4c814f2-f0b8-497b-b2bf-8b699805f073" containerName="mariadb-account-create" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.405011 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.407650 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7c7pb" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.409288 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.414760 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.420954 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rrjhf"] Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.520188 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/247f0a58-e456-4673-8ca8-15e12ab2af71-etc-machine-id\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.520308 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-config-data\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.520362 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzbl\" (UniqueName: \"kubernetes.io/projected/247f0a58-e456-4673-8ca8-15e12ab2af71-kube-api-access-rjzbl\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.520391 4988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-db-sync-config-data\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.520660 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-scripts\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.520722 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-combined-ca-bundle\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.623915 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjzbl\" (UniqueName: \"kubernetes.io/projected/247f0a58-e456-4673-8ca8-15e12ab2af71-kube-api-access-rjzbl\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.624104 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-db-sync-config-data\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.624181 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-scripts\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.624431 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-combined-ca-bundle\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.624494 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/247f0a58-e456-4673-8ca8-15e12ab2af71-etc-machine-id\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.624727 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-config-data\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.625406 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/247f0a58-e456-4673-8ca8-15e12ab2af71-etc-machine-id\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.629965 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-config-data\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.630789 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-db-sync-config-data\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.631618 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-scripts\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.644176 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-combined-ca-bundle\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.646081 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjzbl\" (UniqueName: \"kubernetes.io/projected/247f0a58-e456-4673-8ca8-15e12ab2af71-kube-api-access-rjzbl\") pod \"cinder-db-sync-rrjhf\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:05 crc kubenswrapper[4988]: I1123 08:17:05.728980 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:06 crc kubenswrapper[4988]: W1123 08:17:06.190245 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod247f0a58_e456_4673_8ca8_15e12ab2af71.slice/crio-11a0753f0be258df5f0fd01d7e162c5ef403e032ee2f107b2adef90a018c32fc WatchSource:0}: Error finding container 11a0753f0be258df5f0fd01d7e162c5ef403e032ee2f107b2adef90a018c32fc: Status 404 returned error can't find the container with id 11a0753f0be258df5f0fd01d7e162c5ef403e032ee2f107b2adef90a018c32fc Nov 23 08:17:06 crc kubenswrapper[4988]: I1123 08:17:06.199093 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-rrjhf"] Nov 23 08:17:06 crc kubenswrapper[4988]: I1123 08:17:06.357666 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rrjhf" event={"ID":"247f0a58-e456-4673-8ca8-15e12ab2af71","Type":"ContainerStarted","Data":"11a0753f0be258df5f0fd01d7e162c5ef403e032ee2f107b2adef90a018c32fc"} Nov 23 08:17:26 crc kubenswrapper[4988]: I1123 08:17:26.547147 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rrjhf" event={"ID":"247f0a58-e456-4673-8ca8-15e12ab2af71","Type":"ContainerStarted","Data":"e1ddf054127b80092e1a5db2964fcf6d6feb96a6cadb5a47eee4e9d2198bb2d4"} Nov 23 08:17:26 crc kubenswrapper[4988]: I1123 08:17:26.573397 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-rrjhf" podStartSLOduration=2.012716565 podStartE2EDuration="21.573380138s" podCreationTimestamp="2025-11-23 08:17:05 +0000 UTC" firstStartedPulling="2025-11-23 08:17:06.192639612 +0000 UTC m=+5478.501152375" lastFinishedPulling="2025-11-23 08:17:25.753303185 +0000 UTC m=+5498.061815948" observedRunningTime="2025-11-23 08:17:26.567089624 +0000 UTC m=+5498.875602397" watchObservedRunningTime="2025-11-23 08:17:26.573380138 +0000 UTC m=+5498.881892901" Nov 23 08:17:29 crc kubenswrapper[4988]: I1123 08:17:29.605547 4988 generic.go:334] "Generic (PLEG): container finished" podID="247f0a58-e456-4673-8ca8-15e12ab2af71" containerID="e1ddf054127b80092e1a5db2964fcf6d6feb96a6cadb5a47eee4e9d2198bb2d4" exitCode=0 Nov 23 08:17:29 crc kubenswrapper[4988]: I1123 08:17:29.605653 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rrjhf" event={"ID":"247f0a58-e456-4673-8ca8-15e12ab2af71","Type":"ContainerDied","Data":"e1ddf054127b80092e1a5db2964fcf6d6feb96a6cadb5a47eee4e9d2198bb2d4"} Nov 23 08:17:30 crc kubenswrapper[4988]: I1123 08:17:30.957787 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.122186 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-combined-ca-bundle\") pod \"247f0a58-e456-4673-8ca8-15e12ab2af71\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.122283 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-scripts\") pod \"247f0a58-e456-4673-8ca8-15e12ab2af71\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.122426 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/247f0a58-e456-4673-8ca8-15e12ab2af71-etc-machine-id\") pod \"247f0a58-e456-4673-8ca8-15e12ab2af71\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.122476 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-config-data\") pod \"247f0a58-e456-4673-8ca8-15e12ab2af71\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.122517 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjzbl\" (UniqueName: \"kubernetes.io/projected/247f0a58-e456-4673-8ca8-15e12ab2af71-kube-api-access-rjzbl\") pod \"247f0a58-e456-4673-8ca8-15e12ab2af71\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.122537 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-db-sync-config-data\") pod \"247f0a58-e456-4673-8ca8-15e12ab2af71\" (UID: \"247f0a58-e456-4673-8ca8-15e12ab2af71\") " Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.122641 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/247f0a58-e456-4673-8ca8-15e12ab2af71-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "247f0a58-e456-4673-8ca8-15e12ab2af71" (UID: "247f0a58-e456-4673-8ca8-15e12ab2af71"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.122963 4988 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/247f0a58-e456-4673-8ca8-15e12ab2af71-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.127106 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247f0a58-e456-4673-8ca8-15e12ab2af71-kube-api-access-rjzbl" (OuterVolumeSpecName: "kube-api-access-rjzbl") pod "247f0a58-e456-4673-8ca8-15e12ab2af71" (UID: "247f0a58-e456-4673-8ca8-15e12ab2af71"). InnerVolumeSpecName "kube-api-access-rjzbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.127552 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-scripts" (OuterVolumeSpecName: "scripts") pod "247f0a58-e456-4673-8ca8-15e12ab2af71" (UID: "247f0a58-e456-4673-8ca8-15e12ab2af71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.127611 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "247f0a58-e456-4673-8ca8-15e12ab2af71" (UID: "247f0a58-e456-4673-8ca8-15e12ab2af71"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.145909 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "247f0a58-e456-4673-8ca8-15e12ab2af71" (UID: "247f0a58-e456-4673-8ca8-15e12ab2af71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.183255 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-config-data" (OuterVolumeSpecName: "config-data") pod "247f0a58-e456-4673-8ca8-15e12ab2af71" (UID: "247f0a58-e456-4673-8ca8-15e12ab2af71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.224088 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.224131 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.224145 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.224157 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjzbl\" (UniqueName: \"kubernetes.io/projected/247f0a58-e456-4673-8ca8-15e12ab2af71-kube-api-access-rjzbl\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.224170 4988 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/247f0a58-e456-4673-8ca8-15e12ab2af71-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.629007 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-rrjhf" event={"ID":"247f0a58-e456-4673-8ca8-15e12ab2af71","Type":"ContainerDied","Data":"11a0753f0be258df5f0fd01d7e162c5ef403e032ee2f107b2adef90a018c32fc"} Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.629046 4988 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="11a0753f0be258df5f0fd01d7e162c5ef403e032ee2f107b2adef90a018c32fc" Nov 23 08:17:31 crc kubenswrapper[4988]: I1123 08:17:31.629080 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-rrjhf" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.084869 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d6d5c6489-289lx"] Nov 23 08:17:32 crc kubenswrapper[4988]: E1123 08:17:32.085190 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247f0a58-e456-4673-8ca8-15e12ab2af71" containerName="cinder-db-sync" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.085204 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="247f0a58-e456-4673-8ca8-15e12ab2af71" containerName="cinder-db-sync" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.085405 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="247f0a58-e456-4673-8ca8-15e12ab2af71" containerName="cinder-db-sync" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.086310 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.102737 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6d5c6489-289lx"] Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.246164 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.246402 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.246459 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtqj\" (UniqueName: \"kubernetes.io/projected/006175e8-8110-46ec-9d88-102e64fc6360-kube-api-access-wrtqj\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.246491 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-config\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.246679 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-dns-svc\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.257832 4988 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-api-0"] Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.259865 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.268803 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.268975 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.270785 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.273201 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7c7pb" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.278960 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.348534 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.348595 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrtqj\" (UniqueName: \"kubernetes.io/projected/006175e8-8110-46ec-9d88-102e64fc6360-kube-api-access-wrtqj\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.348634 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-config\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.348675 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-dns-svc\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.348730 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.349666 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-dns-svc\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.349730 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.349743 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-config\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.349764 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.363864 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrtqj\" (UniqueName: \"kubernetes.io/projected/006175e8-8110-46ec-9d88-102e64fc6360-kube-api-access-wrtqj\") pod \"dnsmasq-dns-5d6d5c6489-289lx\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.409635 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.450463 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/075caff1-e649-44f2-8bcf-c8fe3c11f197-logs\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.451015 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.451059 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data-custom\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.451081 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-scripts\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.451106 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqc2\" (UniqueName: \"kubernetes.io/projected/075caff1-e649-44f2-8bcf-c8fe3c11f197-kube-api-access-hvqc2\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.451467 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.451523 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/075caff1-e649-44f2-8bcf-c8fe3c11f197-etc-machine-id\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.554018 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.554311 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data-custom\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.554359 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-scripts\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.554418 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqc2\" (UniqueName: \"kubernetes.io/projected/075caff1-e649-44f2-8bcf-c8fe3c11f197-kube-api-access-hvqc2\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.554681 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.554713 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/075caff1-e649-44f2-8bcf-c8fe3c11f197-etc-machine-id\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.554785 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/075caff1-e649-44f2-8bcf-c8fe3c11f197-logs\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.555266 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/075caff1-e649-44f2-8bcf-c8fe3c11f197-logs\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.558041 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/075caff1-e649-44f2-8bcf-c8fe3c11f197-etc-machine-id\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.562737 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.565284 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data-custom\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.570330 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-scripts\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.572218 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.574410 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqc2\" (UniqueName: \"kubernetes.io/projected/075caff1-e649-44f2-8bcf-c8fe3c11f197-kube-api-access-hvqc2\") pod \"cinder-api-0\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.579816 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:17:32 crc kubenswrapper[4988]: I1123 08:17:32.884666 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6d5c6489-289lx"] Nov 23 08:17:33 crc kubenswrapper[4988]: I1123 08:17:33.114045 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:17:33 crc kubenswrapper[4988]: W1123 08:17:33.152459 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod075caff1_e649_44f2_8bcf_c8fe3c11f197.slice/crio-f7ea666fcf35163236a1badcd757e3c5106ec43edd2a26e4131918f5ad82e2de WatchSource:0}: Error finding container f7ea666fcf35163236a1badcd757e3c5106ec43edd2a26e4131918f5ad82e2de: Status 404 returned error can't find the container with id f7ea666fcf35163236a1badcd757e3c5106ec43edd2a26e4131918f5ad82e2de Nov 23 08:17:33 crc kubenswrapper[4988]: I1123 08:17:33.654317 4988 generic.go:334] "Generic (PLEG): container finished" podID="006175e8-8110-46ec-9d88-102e64fc6360" containerID="1abf6ca90f76b21c6aaef98d7c9937cccf4d800732637b3a09695f8adf25fe19" exitCode=0 Nov 23 08:17:33 crc kubenswrapper[4988]: I1123 08:17:33.654401 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" event={"ID":"006175e8-8110-46ec-9d88-102e64fc6360","Type":"ContainerDied","Data":"1abf6ca90f76b21c6aaef98d7c9937cccf4d800732637b3a09695f8adf25fe19"} Nov 23 08:17:33 crc kubenswrapper[4988]: I1123 08:17:33.654436 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" event={"ID":"006175e8-8110-46ec-9d88-102e64fc6360","Type":"ContainerStarted","Data":"65bec2d254938823b6e292f1746ca011c13a849829d9831023cdd0d9f85978cb"} Nov 23 08:17:33 crc kubenswrapper[4988]: I1123 08:17:33.655799 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"075caff1-e649-44f2-8bcf-c8fe3c11f197","Type":"ContainerStarted","Data":"f7ea666fcf35163236a1badcd757e3c5106ec43edd2a26e4131918f5ad82e2de"} Nov 23 08:17:34 crc kubenswrapper[4988]: I1123 08:17:34.449398 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:17:34 crc kubenswrapper[4988]: I1123 08:17:34.666047 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" event={"ID":"006175e8-8110-46ec-9d88-102e64fc6360","Type":"ContainerStarted","Data":"2663454b1095eee8ac4bab0b16d1bec867024229e2296a963adcada156f57978"} Nov 23 08:17:34 crc kubenswrapper[4988]: I1123 08:17:34.666122 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:34 crc kubenswrapper[4988]: I1123 08:17:34.668391 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"075caff1-e649-44f2-8bcf-c8fe3c11f197","Type":"ContainerStarted","Data":"53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0"} Nov 23 08:17:34 crc kubenswrapper[4988]: I1123 08:17:34.668435 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"075caff1-e649-44f2-8bcf-c8fe3c11f197","Type":"ContainerStarted","Data":"9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab"} Nov 23 08:17:34 crc kubenswrapper[4988]: I1123 08:17:34.668546 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 08:17:34 crc kubenswrapper[4988]: I1123 08:17:34.712739 
4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.712720906 podStartE2EDuration="2.712720906s" podCreationTimestamp="2025-11-23 08:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:17:34.708217886 +0000 UTC m=+5507.016730659" watchObservedRunningTime="2025-11-23 08:17:34.712720906 +0000 UTC m=+5507.021233669" Nov 23 08:17:34 crc kubenswrapper[4988]: I1123 08:17:34.714520 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" podStartSLOduration=2.71451022 podStartE2EDuration="2.71451022s" podCreationTimestamp="2025-11-23 08:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:17:34.691627562 +0000 UTC m=+5507.000140335" watchObservedRunningTime="2025-11-23 08:17:34.71451022 +0000 UTC m=+5507.023022973" Nov 23 08:17:35 crc kubenswrapper[4988]: I1123 08:17:35.679532 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerName="cinder-api-log" containerID="cri-o://9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab" gracePeriod=30 Nov 23 08:17:35 crc kubenswrapper[4988]: I1123 08:17:35.679579 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerName="cinder-api" containerID="cri-o://53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0" gracePeriod=30 Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.263132 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.422269 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data-custom\") pod \"075caff1-e649-44f2-8bcf-c8fe3c11f197\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.422491 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data\") pod \"075caff1-e649-44f2-8bcf-c8fe3c11f197\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.422545 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-scripts\") pod \"075caff1-e649-44f2-8bcf-c8fe3c11f197\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.422596 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/075caff1-e649-44f2-8bcf-c8fe3c11f197-logs\") pod \"075caff1-e649-44f2-8bcf-c8fe3c11f197\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.422652 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/075caff1-e649-44f2-8bcf-c8fe3c11f197-etc-machine-id\") pod \"075caff1-e649-44f2-8bcf-c8fe3c11f197\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.422707 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvqc2\" (UniqueName: \"kubernetes.io/projected/075caff1-e649-44f2-8bcf-c8fe3c11f197-kube-api-access-hvqc2\") pod \"075caff1-e649-44f2-8bcf-c8fe3c11f197\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.422771 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-combined-ca-bundle\") pod \"075caff1-e649-44f2-8bcf-c8fe3c11f197\" (UID: \"075caff1-e649-44f2-8bcf-c8fe3c11f197\") " Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.422842 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/075caff1-e649-44f2-8bcf-c8fe3c11f197-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "075caff1-e649-44f2-8bcf-c8fe3c11f197" (UID: "075caff1-e649-44f2-8bcf-c8fe3c11f197"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.422922 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/075caff1-e649-44f2-8bcf-c8fe3c11f197-logs" (OuterVolumeSpecName: "logs") pod "075caff1-e649-44f2-8bcf-c8fe3c11f197" (UID: "075caff1-e649-44f2-8bcf-c8fe3c11f197"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.424020 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/075caff1-e649-44f2-8bcf-c8fe3c11f197-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.424059 4988 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/075caff1-e649-44f2-8bcf-c8fe3c11f197-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.428493 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "075caff1-e649-44f2-8bcf-c8fe3c11f197" (UID: "075caff1-e649-44f2-8bcf-c8fe3c11f197"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.430496 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-scripts" (OuterVolumeSpecName: "scripts") pod "075caff1-e649-44f2-8bcf-c8fe3c11f197" (UID: "075caff1-e649-44f2-8bcf-c8fe3c11f197"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.435606 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075caff1-e649-44f2-8bcf-c8fe3c11f197-kube-api-access-hvqc2" (OuterVolumeSpecName: "kube-api-access-hvqc2") pod "075caff1-e649-44f2-8bcf-c8fe3c11f197" (UID: "075caff1-e649-44f2-8bcf-c8fe3c11f197"). InnerVolumeSpecName "kube-api-access-hvqc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.461692 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "075caff1-e649-44f2-8bcf-c8fe3c11f197" (UID: "075caff1-e649-44f2-8bcf-c8fe3c11f197"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.504059 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data" (OuterVolumeSpecName: "config-data") pod "075caff1-e649-44f2-8bcf-c8fe3c11f197" (UID: "075caff1-e649-44f2-8bcf-c8fe3c11f197"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.525696 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.525726 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.525750 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvqc2\" (UniqueName: \"kubernetes.io/projected/075caff1-e649-44f2-8bcf-c8fe3c11f197-kube-api-access-hvqc2\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.525764 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.525772 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/075caff1-e649-44f2-8bcf-c8fe3c11f197-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.691503 4988 generic.go:334] "Generic (PLEG): container finished" podID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerID="53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0" exitCode=0 Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.691534 4988 generic.go:334] "Generic (PLEG): container finished" podID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerID="9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab" exitCode=143 Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.691556 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"075caff1-e649-44f2-8bcf-c8fe3c11f197","Type":"ContainerDied","Data":"53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0"} Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.691582 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"075caff1-e649-44f2-8bcf-c8fe3c11f197","Type":"ContainerDied","Data":"9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab"} Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.691592 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"075caff1-e649-44f2-8bcf-c8fe3c11f197","Type":"ContainerDied","Data":"f7ea666fcf35163236a1badcd757e3c5106ec43edd2a26e4131918f5ad82e2de"} Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.691684 4988 scope.go:117] "RemoveContainer" containerID="53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.691815 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.720734 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.732694 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.744761 4988 scope.go:117] "RemoveContainer" containerID="9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.745463 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:17:36 crc kubenswrapper[4988]: E1123 08:17:36.745851 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerName="cinder-api-log" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.745929 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerName="cinder-api-log" Nov 23 08:17:36 crc kubenswrapper[4988]: E1123 08:17:36.746015 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerName="cinder-api" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.746066 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerName="cinder-api" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.746298 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerName="cinder-api-log" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.746386 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="075caff1-e649-44f2-8bcf-c8fe3c11f197" containerName="cinder-api" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.747311 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.752563 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.752773 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.752890 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.753233 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7c7pb" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.753465 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.754212 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.767959 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.783935 4988 scope.go:117] "RemoveContainer" containerID="53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0" Nov 23 08:17:36 crc kubenswrapper[4988]: E1123 08:17:36.784943 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0\": container with ID starting with 53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0 not found: ID does not exist" containerID="53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.785012 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0"} err="failed to get container status \"53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0\": rpc error: code = NotFound desc = could not find container \"53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0\": container with ID starting with 53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0 not found: ID does not exist" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.785054 4988 scope.go:117] "RemoveContainer" containerID="9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab" Nov 23 08:17:36 crc kubenswrapper[4988]: E1123 08:17:36.785818 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab\": container with ID starting with 9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab not found: ID does not exist" containerID="9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.785860 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab"} err="failed to get container status \"9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab\": rpc error: code = NotFound desc = could not find container \"9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab\": 
container with ID starting with 9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab not found: ID does not exist" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.785888 4988 scope.go:117] "RemoveContainer" containerID="53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.786586 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0"} err="failed to get container status \"53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0\": rpc error: code = NotFound desc = could not find container \"53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0\": container with ID starting with 53ef78150647494485c8f89084a04470086748ded8af532c041d03f82eae7bb0 not found: ID does not exist" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.786647 4988 scope.go:117] "RemoveContainer" containerID="9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.787946 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab"} err="failed to get container status \"9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab\": rpc error: code = NotFound desc = could not find container \"9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab\": container with ID starting with 9925a333fad7bb17b72ec3d69a3654a5889a8c6b257f19543ac2d22de02ae4ab not found: ID does not exist" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.831849 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.831931 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83158b54-f752-4658-b031-b5b79b880d93-logs\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.831965 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft828\" (UniqueName: \"kubernetes.io/projected/83158b54-f752-4658-b031-b5b79b880d93-kube-api-access-ft828\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.832058 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.832114 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 
08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.832437 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data-custom\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.832488 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83158b54-f752-4658-b031-b5b79b880d93-etc-machine-id\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.832747 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-scripts\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.832799 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-public-tls-certs\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.934462 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.934548 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83158b54-f752-4658-b031-b5b79b880d93-logs\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.934577 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft828\" (UniqueName: \"kubernetes.io/projected/83158b54-f752-4658-b031-b5b79b880d93-kube-api-access-ft828\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.934604 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.934627 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.934652 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.934674 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83158b54-f752-4658-b031-b5b79b880d93-etc-machine-id\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.934723 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-scripts\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.934748 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-public-tls-certs\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.935192 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83158b54-f752-4658-b031-b5b79b880d93-etc-machine-id\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.936483 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83158b54-f752-4658-b031-b5b79b880d93-logs\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.939021 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.939325 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-public-tls-certs\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.940907 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-scripts\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.941481 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.941500 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 
crc kubenswrapper[4988]: I1123 08:17:36.947531 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data-custom\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:36 crc kubenswrapper[4988]: I1123 08:17:36.960199 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft828\" (UniqueName: \"kubernetes.io/projected/83158b54-f752-4658-b031-b5b79b880d93-kube-api-access-ft828\") pod \"cinder-api-0\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " pod="openstack/cinder-api-0" Nov 23 08:17:37 crc kubenswrapper[4988]: I1123 08:17:37.139515 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:17:37 crc kubenswrapper[4988]: I1123 08:17:37.624914 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:17:37 crc kubenswrapper[4988]: I1123 08:17:37.704962 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"83158b54-f752-4658-b031-b5b79b880d93","Type":"ContainerStarted","Data":"2c330ff5915581590e93fda61e577b464b2c72e793b23d8d32a2a7c16d379ebb"} Nov 23 08:17:38 crc kubenswrapper[4988]: I1123 08:17:38.518146 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075caff1-e649-44f2-8bcf-c8fe3c11f197" path="/var/lib/kubelet/pods/075caff1-e649-44f2-8bcf-c8fe3c11f197/volumes" Nov 23 08:17:38 crc kubenswrapper[4988]: I1123 08:17:38.722329 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"83158b54-f752-4658-b031-b5b79b880d93","Type":"ContainerStarted","Data":"35405be6c4073bd44b16b578292566aa28007bac3a0c7d5864816e1c4cb792e3"} Nov 23 08:17:39 crc kubenswrapper[4988]: I1123 08:17:39.733122 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"83158b54-f752-4658-b031-b5b79b880d93","Type":"ContainerStarted","Data":"c40a2812f074cdacc0f74f9ec5faab5976b3857f4adc9416b758712fe60a1f21"} Nov 23 08:17:39 crc kubenswrapper[4988]: I1123 08:17:39.734563 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 08:17:39 crc kubenswrapper[4988]: I1123 08:17:39.764517 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.764500452 podStartE2EDuration="3.764500452s" podCreationTimestamp="2025-11-23 08:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:17:39.761536949 +0000 UTC m=+5512.070049712" watchObservedRunningTime="2025-11-23 08:17:39.764500452 +0000 UTC m=+5512.073013225" Nov 23 08:17:42 crc kubenswrapper[4988]: I1123 08:17:42.411592 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:17:42 crc kubenswrapper[4988]: I1123 08:17:42.479131 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5758f7685-d7fl9"] Nov 23 08:17:42 crc kubenswrapper[4988]: I1123 08:17:42.479457 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" podUID="cb7a1444-bb69-441e-a48a-37d458140666" containerName="dnsmasq-dns" 
containerID="cri-o://86416b02d257d4661a16353032837967e9aa347f7ca1468b5de097232ad35e0f" gracePeriod=10 Nov 23 08:17:42 crc kubenswrapper[4988]: I1123 08:17:42.764094 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb7a1444-bb69-441e-a48a-37d458140666" containerID="86416b02d257d4661a16353032837967e9aa347f7ca1468b5de097232ad35e0f" exitCode=0 Nov 23 08:17:42 crc kubenswrapper[4988]: I1123 08:17:42.764129 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" event={"ID":"cb7a1444-bb69-441e-a48a-37d458140666","Type":"ContainerDied","Data":"86416b02d257d4661a16353032837967e9aa347f7ca1468b5de097232ad35e0f"} Nov 23 08:17:42 crc kubenswrapper[4988]: I1123 08:17:42.958085 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.004351 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-config\") pod \"cb7a1444-bb69-441e-a48a-37d458140666\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.004712 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-nb\") pod \"cb7a1444-bb69-441e-a48a-37d458140666\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.004767 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-sb\") pod \"cb7a1444-bb69-441e-a48a-37d458140666\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.004813 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwqlb\" (UniqueName: \"kubernetes.io/projected/cb7a1444-bb69-441e-a48a-37d458140666-kube-api-access-jwqlb\") pod \"cb7a1444-bb69-441e-a48a-37d458140666\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.004890 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-dns-svc\") pod \"cb7a1444-bb69-441e-a48a-37d458140666\" (UID: \"cb7a1444-bb69-441e-a48a-37d458140666\") " Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.010889 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7a1444-bb69-441e-a48a-37d458140666-kube-api-access-jwqlb" (OuterVolumeSpecName: "kube-api-access-jwqlb") pod "cb7a1444-bb69-441e-a48a-37d458140666" (UID: "cb7a1444-bb69-441e-a48a-37d458140666"). InnerVolumeSpecName "kube-api-access-jwqlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.054419 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-config" (OuterVolumeSpecName: "config") pod "cb7a1444-bb69-441e-a48a-37d458140666" (UID: "cb7a1444-bb69-441e-a48a-37d458140666"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.054780 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb7a1444-bb69-441e-a48a-37d458140666" (UID: "cb7a1444-bb69-441e-a48a-37d458140666"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.054886 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb7a1444-bb69-441e-a48a-37d458140666" (UID: "cb7a1444-bb69-441e-a48a-37d458140666"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.060820 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb7a1444-bb69-441e-a48a-37d458140666" (UID: "cb7a1444-bb69-441e-a48a-37d458140666"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.106462 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.106500 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.106513 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.106526 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwqlb\" (UniqueName: \"kubernetes.io/projected/cb7a1444-bb69-441e-a48a-37d458140666-kube-api-access-jwqlb\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.106538 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7a1444-bb69-441e-a48a-37d458140666-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.775326 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" event={"ID":"cb7a1444-bb69-441e-a48a-37d458140666","Type":"ContainerDied","Data":"9349fa918625e60eb1a5b7765f06125a362b6b820d4d072cb0f71e1a6ab9ec20"} Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.775393 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5758f7685-d7fl9" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.776766 4988 scope.go:117] "RemoveContainer" containerID="86416b02d257d4661a16353032837967e9aa347f7ca1468b5de097232ad35e0f" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.810647 4988 scope.go:117] "RemoveContainer" containerID="6aa743aad47741f4ca839d4ce6c4409df16feb5b7b8732e1d3076d1f41fb7a50" Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.837364 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5758f7685-d7fl9"] Nov 23 08:17:43 crc kubenswrapper[4988]: I1123 08:17:43.850906 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5758f7685-d7fl9"] Nov 23 08:17:44 crc kubenswrapper[4988]: I1123 08:17:44.509374 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7a1444-bb69-441e-a48a-37d458140666" path="/var/lib/kubelet/pods/cb7a1444-bb69-441e-a48a-37d458140666/volumes" Nov 23 08:17:48 crc kubenswrapper[4988]: I1123 08:17:48.936494 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.195054 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:18:06 crc kubenswrapper[4988]: E1123 08:18:06.195868 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7a1444-bb69-441e-a48a-37d458140666" containerName="dnsmasq-dns" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.195886 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7a1444-bb69-441e-a48a-37d458140666" containerName="dnsmasq-dns" Nov 23 08:18:06 crc kubenswrapper[4988]: E1123 08:18:06.195902 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb7a1444-bb69-441e-a48a-37d458140666" containerName="init" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.195908 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb7a1444-bb69-441e-a48a-37d458140666" containerName="init" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.196072 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7a1444-bb69-441e-a48a-37d458140666" containerName="dnsmasq-dns" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.196922 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.198945 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.210989 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.363153 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.363201 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2zh\" (UniqueName: \"kubernetes.io/projected/ef133739-82c8-45a1-83b3-6270c70813c3-kube-api-access-vt2zh\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.363412 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.363697 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef133739-82c8-45a1-83b3-6270c70813c3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.363776 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.364033 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.465433 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt2zh\" (UniqueName: \"kubernetes.io/projected/ef133739-82c8-45a1-83b3-6270c70813c3-kube-api-access-vt2zh\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.465516 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.465590 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef133739-82c8-45a1-83b3-6270c70813c3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.465614 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.465718 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.465805 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.466584 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef133739-82c8-45a1-83b3-6270c70813c3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.471935 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.472260 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.473449 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-scripts\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.473736 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.490068 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt2zh\" (UniqueName: \"kubernetes.io/projected/ef133739-82c8-45a1-83b3-6270c70813c3-kube-api-access-vt2zh\") pod \"cinder-scheduler-0\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " pod="openstack/cinder-scheduler-0" Nov 
23 08:18:06 crc kubenswrapper[4988]: I1123 08:18:06.516114 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:18:07 crc kubenswrapper[4988]: I1123 08:18:07.032522 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:18:07 crc kubenswrapper[4988]: I1123 08:18:07.038625 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:18:07 crc kubenswrapper[4988]: I1123 08:18:07.055291 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef133739-82c8-45a1-83b3-6270c70813c3","Type":"ContainerStarted","Data":"8ef9127e2b0e94d133c728b07a065db99e7e7bf25ce85fcab9f73d34024a6288"} Nov 23 08:18:07 crc kubenswrapper[4988]: I1123 08:18:07.666125 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:18:07 crc kubenswrapper[4988]: I1123 08:18:07.667670 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="83158b54-f752-4658-b031-b5b79b880d93" containerName="cinder-api-log" containerID="cri-o://35405be6c4073bd44b16b578292566aa28007bac3a0c7d5864816e1c4cb792e3" gracePeriod=30 Nov 23 08:18:07 crc kubenswrapper[4988]: I1123 08:18:07.667864 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="83158b54-f752-4658-b031-b5b79b880d93" containerName="cinder-api" containerID="cri-o://c40a2812f074cdacc0f74f9ec5faab5976b3857f4adc9416b758712fe60a1f21" gracePeriod=30 Nov 23 08:18:08 crc kubenswrapper[4988]: I1123 08:18:08.067246 4988 generic.go:334] "Generic (PLEG): container finished" podID="83158b54-f752-4658-b031-b5b79b880d93" containerID="35405be6c4073bd44b16b578292566aa28007bac3a0c7d5864816e1c4cb792e3" exitCode=143 Nov 23 08:18:08 crc kubenswrapper[4988]: I1123 08:18:08.067338 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"83158b54-f752-4658-b031-b5b79b880d93","Type":"ContainerDied","Data":"35405be6c4073bd44b16b578292566aa28007bac3a0c7d5864816e1c4cb792e3"} Nov 23 08:18:08 crc kubenswrapper[4988]: I1123 08:18:08.069137 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef133739-82c8-45a1-83b3-6270c70813c3","Type":"ContainerStarted","Data":"058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac"} Nov 23 08:18:09 crc kubenswrapper[4988]: I1123 08:18:09.078173 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef133739-82c8-45a1-83b3-6270c70813c3","Type":"ContainerStarted","Data":"f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e"} Nov 23 08:18:09 crc kubenswrapper[4988]: I1123 08:18:09.108723 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.793717824 podStartE2EDuration="3.108702349s" podCreationTimestamp="2025-11-23 08:18:06 +0000 UTC" firstStartedPulling="2025-11-23 08:18:07.03814107 +0000 UTC m=+5539.346653873" lastFinishedPulling="2025-11-23 08:18:07.353125615 +0000 UTC m=+5539.661638398" observedRunningTime="2025-11-23 08:18:09.100362855 +0000 UTC m=+5541.408875628" watchObservedRunningTime="2025-11-23 08:18:09.108702349 +0000 UTC m=+5541.417215122" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.117175 4988 generic.go:334] "Generic (PLEG): container finished" 
podID="83158b54-f752-4658-b031-b5b79b880d93" containerID="c40a2812f074cdacc0f74f9ec5faab5976b3857f4adc9416b758712fe60a1f21" exitCode=0 Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.117265 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"83158b54-f752-4658-b031-b5b79b880d93","Type":"ContainerDied","Data":"c40a2812f074cdacc0f74f9ec5faab5976b3857f4adc9416b758712fe60a1f21"} Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.307249 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.366107 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft828\" (UniqueName: \"kubernetes.io/projected/83158b54-f752-4658-b031-b5b79b880d93-kube-api-access-ft828\") pod \"83158b54-f752-4658-b031-b5b79b880d93\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.366288 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83158b54-f752-4658-b031-b5b79b880d93-etc-machine-id\") pod \"83158b54-f752-4658-b031-b5b79b880d93\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.366332 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-internal-tls-certs\") pod \"83158b54-f752-4658-b031-b5b79b880d93\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.366390 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-combined-ca-bundle\") pod \"83158b54-f752-4658-b031-b5b79b880d93\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.366445 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data\") pod \"83158b54-f752-4658-b031-b5b79b880d93\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.366473 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data-custom\") pod \"83158b54-f752-4658-b031-b5b79b880d93\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.366522 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-scripts\") pod \"83158b54-f752-4658-b031-b5b79b880d93\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.366574 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-public-tls-certs\") pod \"83158b54-f752-4658-b031-b5b79b880d93\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.366649 4988 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83158b54-f752-4658-b031-b5b79b880d93-logs\") pod \"83158b54-f752-4658-b031-b5b79b880d93\" (UID: \"83158b54-f752-4658-b031-b5b79b880d93\") " Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.367847 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83158b54-f752-4658-b031-b5b79b880d93-logs" (OuterVolumeSpecName: "logs") pod "83158b54-f752-4658-b031-b5b79b880d93" (UID: "83158b54-f752-4658-b031-b5b79b880d93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.367950 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83158b54-f752-4658-b031-b5b79b880d93-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "83158b54-f752-4658-b031-b5b79b880d93" (UID: "83158b54-f752-4658-b031-b5b79b880d93"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.380656 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-scripts" (OuterVolumeSpecName: "scripts") pod "83158b54-f752-4658-b031-b5b79b880d93" (UID: "83158b54-f752-4658-b031-b5b79b880d93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.380717 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "83158b54-f752-4658-b031-b5b79b880d93" (UID: "83158b54-f752-4658-b031-b5b79b880d93"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.380803 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83158b54-f752-4658-b031-b5b79b880d93-kube-api-access-ft828" (OuterVolumeSpecName: "kube-api-access-ft828") pod "83158b54-f752-4658-b031-b5b79b880d93" (UID: "83158b54-f752-4658-b031-b5b79b880d93"). InnerVolumeSpecName "kube-api-access-ft828". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.428474 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83158b54-f752-4658-b031-b5b79b880d93" (UID: "83158b54-f752-4658-b031-b5b79b880d93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.447323 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "83158b54-f752-4658-b031-b5b79b880d93" (UID: "83158b54-f752-4658-b031-b5b79b880d93"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.463373 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "83158b54-f752-4658-b031-b5b79b880d93" (UID: "83158b54-f752-4658-b031-b5b79b880d93"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.465303 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data" (OuterVolumeSpecName: "config-data") pod "83158b54-f752-4658-b031-b5b79b880d93" (UID: "83158b54-f752-4658-b031-b5b79b880d93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.469048 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft828\" (UniqueName: \"kubernetes.io/projected/83158b54-f752-4658-b031-b5b79b880d93-kube-api-access-ft828\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.469100 4988 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83158b54-f752-4658-b031-b5b79b880d93-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.469117 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.469130 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.469145 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.469158 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.469170 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.469181 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83158b54-f752-4658-b031-b5b79b880d93-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.469196 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83158b54-f752-4658-b031-b5b79b880d93-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:11 crc kubenswrapper[4988]: I1123 08:18:11.517134 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 
08:18:12.131163 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"83158b54-f752-4658-b031-b5b79b880d93","Type":"ContainerDied","Data":"2c330ff5915581590e93fda61e577b464b2c72e793b23d8d32a2a7c16d379ebb"} Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.131516 4988 scope.go:117] "RemoveContainer" containerID="c40a2812f074cdacc0f74f9ec5faab5976b3857f4adc9416b758712fe60a1f21" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.131230 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.165380 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.172563 4988 scope.go:117] "RemoveContainer" containerID="35405be6c4073bd44b16b578292566aa28007bac3a0c7d5864816e1c4cb792e3" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.181254 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.193620 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:18:12 crc kubenswrapper[4988]: E1123 08:18:12.194044 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83158b54-f752-4658-b031-b5b79b880d93" containerName="cinder-api-log" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.194066 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="83158b54-f752-4658-b031-b5b79b880d93" containerName="cinder-api-log" Nov 23 08:18:12 crc kubenswrapper[4988]: E1123 08:18:12.194094 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83158b54-f752-4658-b031-b5b79b880d93" containerName="cinder-api" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.194103 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="83158b54-f752-4658-b031-b5b79b880d93" containerName="cinder-api" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.194376 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="83158b54-f752-4658-b031-b5b79b880d93" containerName="cinder-api" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.194410 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="83158b54-f752-4658-b031-b5b79b880d93" containerName="cinder-api-log" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.195575 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.198320 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.198547 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.198792 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.206356 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.285782 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-config-data-custom\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.285884 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkqfq\" (UniqueName: \"kubernetes.io/projected/3471cd03-a8d4-4923-a881-26121e7ceaef-kube-api-access-kkqfq\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.285918 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3471cd03-a8d4-4923-a881-26121e7ceaef-logs\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.285990 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-scripts\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.286017 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.286064 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3471cd03-a8d4-4923-a881-26121e7ceaef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.286115 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.286181 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.286349 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-config-data\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.388981 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-scripts\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.389068 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.389149 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3471cd03-a8d4-4923-a881-26121e7ceaef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.389184 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.389285 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3471cd03-a8d4-4923-a881-26121e7ceaef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.389307 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.389424 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-config-data\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.389490 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-config-data-custom\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.389638 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kkqfq\" (UniqueName: \"kubernetes.io/projected/3471cd03-a8d4-4923-a881-26121e7ceaef-kube-api-access-kkqfq\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.389676 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3471cd03-a8d4-4923-a881-26121e7ceaef-logs\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.390116 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3471cd03-a8d4-4923-a881-26121e7ceaef-logs\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.394843 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.395428 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.395466 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-scripts\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.395593 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-config-data-custom\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.399445 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.406959 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3471cd03-a8d4-4923-a881-26121e7ceaef-config-data\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.407151 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkqfq\" (UniqueName: \"kubernetes.io/projected/3471cd03-a8d4-4923-a881-26121e7ceaef-kube-api-access-kkqfq\") pod \"cinder-api-0\" (UID: \"3471cd03-a8d4-4923-a881-26121e7ceaef\") " pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.516264 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83158b54-f752-4658-b031-b5b79b880d93" 
path="/var/lib/kubelet/pods/83158b54-f752-4658-b031-b5b79b880d93/volumes" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.529416 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:18:12 crc kubenswrapper[4988]: I1123 08:18:12.992175 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:18:12 crc kubenswrapper[4988]: W1123 08:18:12.998371 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3471cd03_a8d4_4923_a881_26121e7ceaef.slice/crio-41e19cd6e85ae2fc51147349c087a56fb5f3a952eda0f12e68c52f759326e963 WatchSource:0}: Error finding container 41e19cd6e85ae2fc51147349c087a56fb5f3a952eda0f12e68c52f759326e963: Status 404 returned error can't find the container with id 41e19cd6e85ae2fc51147349c087a56fb5f3a952eda0f12e68c52f759326e963 Nov 23 08:18:13 crc kubenswrapper[4988]: I1123 08:18:13.142767 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3471cd03-a8d4-4923-a881-26121e7ceaef","Type":"ContainerStarted","Data":"41e19cd6e85ae2fc51147349c087a56fb5f3a952eda0f12e68c52f759326e963"} Nov 23 08:18:14 crc kubenswrapper[4988]: I1123 08:18:14.154349 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3471cd03-a8d4-4923-a881-26121e7ceaef","Type":"ContainerStarted","Data":"d2a61b11be22ef66f87c2418775df3d22e0beb69b700da327e8a1f959feee01e"} Nov 23 08:18:15 crc kubenswrapper[4988]: I1123 08:18:15.164765 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3471cd03-a8d4-4923-a881-26121e7ceaef","Type":"ContainerStarted","Data":"04932e1dee8537b4733a0bd9e538dac5ead8f4a8fa64b93f074fe6911256ebd1"} Nov 23 08:18:15 crc kubenswrapper[4988]: I1123 08:18:15.164892 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 08:18:15 crc kubenswrapper[4988]: I1123 08:18:15.189201 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.189181982 podStartE2EDuration="3.189181982s" podCreationTimestamp="2025-11-23 08:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:18:15.186482206 +0000 UTC m=+5547.494994989" watchObservedRunningTime="2025-11-23 08:18:15.189181982 +0000 UTC m=+5547.497694745" Nov 23 08:18:16 crc kubenswrapper[4988]: I1123 08:18:16.785634 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 23 08:18:16 crc kubenswrapper[4988]: I1123 08:18:16.876268 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:18:17 crc kubenswrapper[4988]: I1123 08:18:17.186459 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ef133739-82c8-45a1-83b3-6270c70813c3" containerName="cinder-scheduler" containerID="cri-o://058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac" gracePeriod=30 Nov 23 08:18:17 crc kubenswrapper[4988]: I1123 08:18:17.186943 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ef133739-82c8-45a1-83b3-6270c70813c3" containerName="probe" containerID="cri-o://f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e" 
gracePeriod=30 Nov 23 08:18:18 crc kubenswrapper[4988]: I1123 08:18:18.201476 4988 generic.go:334] "Generic (PLEG): container finished" podID="ef133739-82c8-45a1-83b3-6270c70813c3" containerID="f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e" exitCode=0 Nov 23 08:18:18 crc kubenswrapper[4988]: I1123 08:18:18.201590 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef133739-82c8-45a1-83b3-6270c70813c3","Type":"ContainerDied","Data":"f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e"} Nov 23 08:18:18 crc kubenswrapper[4988]: I1123 08:18:18.950978 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.016949 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-combined-ca-bundle\") pod \"ef133739-82c8-45a1-83b3-6270c70813c3\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.017030 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt2zh\" (UniqueName: \"kubernetes.io/projected/ef133739-82c8-45a1-83b3-6270c70813c3-kube-api-access-vt2zh\") pod \"ef133739-82c8-45a1-83b3-6270c70813c3\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.017066 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data-custom\") pod \"ef133739-82c8-45a1-83b3-6270c70813c3\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.017126 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data\") pod \"ef133739-82c8-45a1-83b3-6270c70813c3\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.017218 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-scripts\") pod \"ef133739-82c8-45a1-83b3-6270c70813c3\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.017435 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef133739-82c8-45a1-83b3-6270c70813c3-etc-machine-id\") pod \"ef133739-82c8-45a1-83b3-6270c70813c3\" (UID: \"ef133739-82c8-45a1-83b3-6270c70813c3\") " Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.017902 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef133739-82c8-45a1-83b3-6270c70813c3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ef133739-82c8-45a1-83b3-6270c70813c3" (UID: "ef133739-82c8-45a1-83b3-6270c70813c3"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.026358 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef133739-82c8-45a1-83b3-6270c70813c3-kube-api-access-vt2zh" (OuterVolumeSpecName: "kube-api-access-vt2zh") pod "ef133739-82c8-45a1-83b3-6270c70813c3" (UID: "ef133739-82c8-45a1-83b3-6270c70813c3"). InnerVolumeSpecName "kube-api-access-vt2zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.027479 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-scripts" (OuterVolumeSpecName: "scripts") pod "ef133739-82c8-45a1-83b3-6270c70813c3" (UID: "ef133739-82c8-45a1-83b3-6270c70813c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.028189 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ef133739-82c8-45a1-83b3-6270c70813c3" (UID: "ef133739-82c8-45a1-83b3-6270c70813c3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.092066 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef133739-82c8-45a1-83b3-6270c70813c3" (UID: "ef133739-82c8-45a1-83b3-6270c70813c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.120035 4988 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ef133739-82c8-45a1-83b3-6270c70813c3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.120291 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.120307 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt2zh\" (UniqueName: \"kubernetes.io/projected/ef133739-82c8-45a1-83b3-6270c70813c3-kube-api-access-vt2zh\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.120322 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.120335 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.139115 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data" (OuterVolumeSpecName: "config-data") pod "ef133739-82c8-45a1-83b3-6270c70813c3" (UID: "ef133739-82c8-45a1-83b3-6270c70813c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.220324 4988 generic.go:334] "Generic (PLEG): container finished" podID="ef133739-82c8-45a1-83b3-6270c70813c3" containerID="058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac" exitCode=0 Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.220388 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef133739-82c8-45a1-83b3-6270c70813c3","Type":"ContainerDied","Data":"058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac"} Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.220447 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ef133739-82c8-45a1-83b3-6270c70813c3","Type":"ContainerDied","Data":"8ef9127e2b0e94d133c728b07a065db99e7e7bf25ce85fcab9f73d34024a6288"} Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.220473 4988 scope.go:117] "RemoveContainer" containerID="f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.220482 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.222315 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef133739-82c8-45a1-83b3-6270c70813c3-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.257686 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.264622 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.287557 4988 scope.go:117] "RemoveContainer" containerID="058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.295615 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:18:19 crc kubenswrapper[4988]: E1123 08:18:19.296332 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef133739-82c8-45a1-83b3-6270c70813c3" containerName="probe" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.296351 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef133739-82c8-45a1-83b3-6270c70813c3" containerName="probe" Nov 23 08:18:19 crc kubenswrapper[4988]: E1123 08:18:19.296366 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef133739-82c8-45a1-83b3-6270c70813c3" containerName="cinder-scheduler" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.296374 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef133739-82c8-45a1-83b3-6270c70813c3" containerName="cinder-scheduler" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.296622 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef133739-82c8-45a1-83b3-6270c70813c3" containerName="probe" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.296638 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef133739-82c8-45a1-83b3-6270c70813c3" containerName="cinder-scheduler" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.298080 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.300694 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.308672 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.328573 4988 scope.go:117] "RemoveContainer" containerID="f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e" Nov 23 08:18:19 crc kubenswrapper[4988]: E1123 08:18:19.329949 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e\": container with ID starting with f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e not found: ID does not exist" containerID="f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.329989 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e"} err="failed to get container status \"f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e\": rpc error: code = NotFound desc = could not find container \"f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e\": container with ID starting with f924f2ed821981494ddb846e55558118d2b8d9ea549fc27e2ea7680d0c6e368e not found: ID does not exist" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.330019 4988 scope.go:117] "RemoveContainer" containerID="058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac" Nov 23 08:18:19 crc kubenswrapper[4988]: E1123 08:18:19.331310 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac\": container with ID starting with 058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac not found: ID does not exist" containerID="058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.331347 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac"} err="failed to get container status \"058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac\": rpc error: code = NotFound desc = could not find container \"058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac\": container with ID starting with 058ace37b8d82cd78ae60504c693ae903ce6272a6a143040bd4dd8552fbcf5ac not found: ID does not exist" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.431015 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-scripts\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.431059 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.431109 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.431166 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.431211 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjpg\" (UniqueName: \"kubernetes.io/projected/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-kube-api-access-2rjpg\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.431232 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-config-data\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.533383 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.533593 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.533689 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rjpg\" (UniqueName: \"kubernetes.io/projected/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-kube-api-access-2rjpg\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.533748 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-config-data\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.533900 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-scripts\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.533962 
4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.534109 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.538833 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-scripts\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.539166 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-config-data\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.539385 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.542101 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.554435 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rjpg\" (UniqueName: \"kubernetes.io/projected/d52991dd-0f33-4543-bbaf-5abe1ee31cbc-kube-api-access-2rjpg\") pod \"cinder-scheduler-0\" (UID: \"d52991dd-0f33-4543-bbaf-5abe1ee31cbc\") " pod="openstack/cinder-scheduler-0" Nov 23 08:18:19 crc kubenswrapper[4988]: I1123 08:18:19.617512 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:18:20 crc kubenswrapper[4988]: I1123 08:18:20.115154 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:18:20 crc kubenswrapper[4988]: I1123 08:18:20.230304 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d52991dd-0f33-4543-bbaf-5abe1ee31cbc","Type":"ContainerStarted","Data":"21e8441fdf31fad759a2190e38f3a5fd170b095407a8fd3b2619f49bfaf16501"} Nov 23 08:18:20 crc kubenswrapper[4988]: I1123 08:18:20.505944 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef133739-82c8-45a1-83b3-6270c70813c3" path="/var/lib/kubelet/pods/ef133739-82c8-45a1-83b3-6270c70813c3/volumes" Nov 23 08:18:21 crc kubenswrapper[4988]: I1123 08:18:21.242493 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d52991dd-0f33-4543-bbaf-5abe1ee31cbc","Type":"ContainerStarted","Data":"0abe664fb969fd3837a70a515021dfc07273fa951d1629ecdc60318dbf480b7d"} Nov 23 08:18:21 crc kubenswrapper[4988]: I1123 08:18:21.242941 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d52991dd-0f33-4543-bbaf-5abe1ee31cbc","Type":"ContainerStarted","Data":"d8e74cf13f9da4b9d014c4c71ed1f41ef4c76ea6089213a886634865dffe1092"} Nov 23 08:18:21 crc kubenswrapper[4988]: I1123 08:18:21.274551 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.274531283 podStartE2EDuration="2.274531283s" podCreationTimestamp="2025-11-23 08:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:18:21.271809287 +0000 UTC m=+5553.580322100" watchObservedRunningTime="2025-11-23 08:18:21.274531283 +0000 UTC m=+5553.583044066" Nov 23 08:18:24 crc kubenswrapper[4988]: I1123 08:18:24.345539 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 23 08:18:24 crc kubenswrapper[4988]: I1123 08:18:24.618077 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 08:18:29 crc kubenswrapper[4988]: I1123 08:18:29.848501 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 23 08:18:32 crc kubenswrapper[4988]: I1123 08:18:32.955703 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-p7456"] Nov 23 08:18:32 crc kubenswrapper[4988]: I1123 08:18:32.958487 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-p7456" Nov 23 08:18:32 crc kubenswrapper[4988]: I1123 08:18:32.970124 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p7456"] Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.013032 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78ce86e7-aef9-4c27-9b85-1705b1ef877d-operator-scripts\") pod \"glance-db-create-p7456\" (UID: \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\") " pod="openstack/glance-db-create-p7456" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.013317 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfwb\" (UniqueName: \"kubernetes.io/projected/78ce86e7-aef9-4c27-9b85-1705b1ef877d-kube-api-access-2kfwb\") pod \"glance-db-create-p7456\" (UID: \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\") " pod="openstack/glance-db-create-p7456" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.052389 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7164-account-create-4hnhx"] Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.054147 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.056569 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.066277 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7164-account-create-4hnhx"] Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.114784 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b8cf\" (UniqueName: \"kubernetes.io/projected/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-kube-api-access-6b8cf\") pod \"glance-7164-account-create-4hnhx\" (UID: \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\") " pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.114861 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kfwb\" (UniqueName: \"kubernetes.io/projected/78ce86e7-aef9-4c27-9b85-1705b1ef877d-kube-api-access-2kfwb\") pod \"glance-db-create-p7456\" (UID: \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\") " pod="openstack/glance-db-create-p7456" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.114950 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78ce86e7-aef9-4c27-9b85-1705b1ef877d-operator-scripts\") pod \"glance-db-create-p7456\" (UID: \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\") " pod="openstack/glance-db-create-p7456" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.115014 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-operator-scripts\") pod \"glance-7164-account-create-4hnhx\" (UID: \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\") " pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.115871 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/78ce86e7-aef9-4c27-9b85-1705b1ef877d-operator-scripts\") pod \"glance-db-create-p7456\" (UID: \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\") " pod="openstack/glance-db-create-p7456" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.138715 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kfwb\" (UniqueName: \"kubernetes.io/projected/78ce86e7-aef9-4c27-9b85-1705b1ef877d-kube-api-access-2kfwb\") pod \"glance-db-create-p7456\" (UID: \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\") " pod="openstack/glance-db-create-p7456" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.216707 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b8cf\" (UniqueName: \"kubernetes.io/projected/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-kube-api-access-6b8cf\") pod \"glance-7164-account-create-4hnhx\" (UID: \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\") " pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.216820 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-operator-scripts\") pod \"glance-7164-account-create-4hnhx\" (UID: \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\") " pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.217472 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-operator-scripts\") pod \"glance-7164-account-create-4hnhx\" (UID: \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\") " pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.237493 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b8cf\" (UniqueName: \"kubernetes.io/projected/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-kube-api-access-6b8cf\") pod \"glance-7164-account-create-4hnhx\" (UID: \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\") " pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.292858 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p7456" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.375721 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.770999 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7164-account-create-4hnhx"] Nov 23 08:18:33 crc kubenswrapper[4988]: W1123 08:18:33.774944 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ec87dde_30c6_42d0_b3ec_aa61bc52a9a4.slice/crio-8a5e2529c41add2ec99c19be4b922c2b16ae14a39e0f89f197ae2d61b7eef2bd WatchSource:0}: Error finding container 8a5e2529c41add2ec99c19be4b922c2b16ae14a39e0f89f197ae2d61b7eef2bd: Status 404 returned error can't find the container with id 8a5e2529c41add2ec99c19be4b922c2b16ae14a39e0f89f197ae2d61b7eef2bd Nov 23 08:18:33 crc kubenswrapper[4988]: I1123 08:18:33.870113 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p7456"] Nov 23 08:18:33 crc kubenswrapper[4988]: W1123 08:18:33.890774 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78ce86e7_aef9_4c27_9b85_1705b1ef877d.slice/crio-bdad8b07536c9eef22e39f21bbb86de680ea83100207e7e6437b638959d04165 WatchSource:0}: Error finding container bdad8b07536c9eef22e39f21bbb86de680ea83100207e7e6437b638959d04165: Status 404 returned error can't find the container with id bdad8b07536c9eef22e39f21bbb86de680ea83100207e7e6437b638959d04165 Nov 23 08:18:34 crc kubenswrapper[4988]: I1123 08:18:34.412780 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7164-account-create-4hnhx" event={"ID":"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4","Type":"ContainerStarted","Data":"91c3804eaa3ea79ba9fcb4241239a1762f5e9853007aaa498ea565ac860a3634"} Nov 23 08:18:34 crc kubenswrapper[4988]: I1123 08:18:34.412817 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7164-account-create-4hnhx" event={"ID":"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4","Type":"ContainerStarted","Data":"8a5e2529c41add2ec99c19be4b922c2b16ae14a39e0f89f197ae2d61b7eef2bd"} Nov 23 08:18:34 crc kubenswrapper[4988]: I1123 08:18:34.414415 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p7456" event={"ID":"78ce86e7-aef9-4c27-9b85-1705b1ef877d","Type":"ContainerStarted","Data":"26a1fc8cf72e69eade4d0d162c378138f3dc6f9eb58e29a617ee5446a8f33839"} Nov 23 08:18:34 crc kubenswrapper[4988]: I1123 08:18:34.414448 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p7456" event={"ID":"78ce86e7-aef9-4c27-9b85-1705b1ef877d","Type":"ContainerStarted","Data":"bdad8b07536c9eef22e39f21bbb86de680ea83100207e7e6437b638959d04165"} Nov 23 08:18:34 crc kubenswrapper[4988]: I1123 08:18:34.450296 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7164-account-create-4hnhx" podStartSLOduration=1.450271768 podStartE2EDuration="1.450271768s" podCreationTimestamp="2025-11-23 08:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:18:34.425949125 +0000 UTC m=+5566.734461888" watchObservedRunningTime="2025-11-23 08:18:34.450271768 +0000 UTC m=+5566.758784551" Nov 23 08:18:34 crc kubenswrapper[4988]: I1123 08:18:34.458250 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-p7456" podStartSLOduration=2.458232893 podStartE2EDuration="2.458232893s" 
podCreationTimestamp="2025-11-23 08:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:18:34.446030345 +0000 UTC m=+5566.754543148" watchObservedRunningTime="2025-11-23 08:18:34.458232893 +0000 UTC m=+5566.766745656" Nov 23 08:18:35 crc kubenswrapper[4988]: I1123 08:18:35.426123 4988 generic.go:334] "Generic (PLEG): container finished" podID="6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4" containerID="91c3804eaa3ea79ba9fcb4241239a1762f5e9853007aaa498ea565ac860a3634" exitCode=0 Nov 23 08:18:35 crc kubenswrapper[4988]: I1123 08:18:35.426462 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7164-account-create-4hnhx" event={"ID":"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4","Type":"ContainerDied","Data":"91c3804eaa3ea79ba9fcb4241239a1762f5e9853007aaa498ea565ac860a3634"} Nov 23 08:18:35 crc kubenswrapper[4988]: I1123 08:18:35.429211 4988 generic.go:334] "Generic (PLEG): container finished" podID="78ce86e7-aef9-4c27-9b85-1705b1ef877d" containerID="26a1fc8cf72e69eade4d0d162c378138f3dc6f9eb58e29a617ee5446a8f33839" exitCode=0 Nov 23 08:18:35 crc kubenswrapper[4988]: I1123 08:18:35.429256 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p7456" event={"ID":"78ce86e7-aef9-4c27-9b85-1705b1ef877d","Type":"ContainerDied","Data":"26a1fc8cf72e69eade4d0d162c378138f3dc6f9eb58e29a617ee5446a8f33839"} Nov 23 08:18:36 crc kubenswrapper[4988]: I1123 08:18:36.908958 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p7456" Nov 23 08:18:36 crc kubenswrapper[4988]: I1123 08:18:36.915882 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.001702 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78ce86e7-aef9-4c27-9b85-1705b1ef877d-operator-scripts\") pod \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\" (UID: \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\") " Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.001908 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kfwb\" (UniqueName: \"kubernetes.io/projected/78ce86e7-aef9-4c27-9b85-1705b1ef877d-kube-api-access-2kfwb\") pod \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\" (UID: \"78ce86e7-aef9-4c27-9b85-1705b1ef877d\") " Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.001951 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b8cf\" (UniqueName: \"kubernetes.io/projected/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-kube-api-access-6b8cf\") pod \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\" (UID: \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\") " Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.001990 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-operator-scripts\") pod \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\" (UID: \"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4\") " Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.002445 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78ce86e7-aef9-4c27-9b85-1705b1ef877d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"78ce86e7-aef9-4c27-9b85-1705b1ef877d" (UID: "78ce86e7-aef9-4c27-9b85-1705b1ef877d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.002648 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4" (UID: "6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.002866 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78ce86e7-aef9-4c27-9b85-1705b1ef877d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.002892 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.008979 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ce86e7-aef9-4c27-9b85-1705b1ef877d-kube-api-access-2kfwb" (OuterVolumeSpecName: "kube-api-access-2kfwb") pod "78ce86e7-aef9-4c27-9b85-1705b1ef877d" (UID: "78ce86e7-aef9-4c27-9b85-1705b1ef877d"). InnerVolumeSpecName "kube-api-access-2kfwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.009133 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-kube-api-access-6b8cf" (OuterVolumeSpecName: "kube-api-access-6b8cf") pod "6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4" (UID: "6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4"). InnerVolumeSpecName "kube-api-access-6b8cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.106664 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kfwb\" (UniqueName: \"kubernetes.io/projected/78ce86e7-aef9-4c27-9b85-1705b1ef877d-kube-api-access-2kfwb\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.106753 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b8cf\" (UniqueName: \"kubernetes.io/projected/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4-kube-api-access-6b8cf\") on node \"crc\" DevicePath \"\"" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.466721 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7164-account-create-4hnhx" event={"ID":"6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4","Type":"ContainerDied","Data":"8a5e2529c41add2ec99c19be4b922c2b16ae14a39e0f89f197ae2d61b7eef2bd"} Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.466899 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a5e2529c41add2ec99c19be4b922c2b16ae14a39e0f89f197ae2d61b7eef2bd" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.467104 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7164-account-create-4hnhx" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.477380 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p7456" event={"ID":"78ce86e7-aef9-4c27-9b85-1705b1ef877d","Type":"ContainerDied","Data":"bdad8b07536c9eef22e39f21bbb86de680ea83100207e7e6437b638959d04165"} Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.477446 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdad8b07536c9eef22e39f21bbb86de680ea83100207e7e6437b638959d04165" Nov 23 08:18:37 crc kubenswrapper[4988]: I1123 08:18:37.477532 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p7456" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.226795 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-k9qj7"] Nov 23 08:18:38 crc kubenswrapper[4988]: E1123 08:18:38.227977 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ce86e7-aef9-4c27-9b85-1705b1ef877d" containerName="mariadb-database-create" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.228008 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ce86e7-aef9-4c27-9b85-1705b1ef877d" containerName="mariadb-database-create" Nov 23 08:18:38 crc kubenswrapper[4988]: E1123 08:18:38.228046 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4" containerName="mariadb-account-create" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.228059 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4" containerName="mariadb-account-create" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.228749 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ce86e7-aef9-4c27-9b85-1705b1ef877d" containerName="mariadb-database-create" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.228800 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4" containerName="mariadb-account-create" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.230030 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.252347 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-k9qj7"] Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.270662 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pcqrh" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.270735 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.358415 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-combined-ca-bundle\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.358827 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjrnc\" (UniqueName: \"kubernetes.io/projected/775678bf-5721-4f4b-ac7d-f4ff4982a94d-kube-api-access-xjrnc\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.358847 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-config-data\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.358894 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-db-sync-config-data\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.460045 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-combined-ca-bundle\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.460154 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjrnc\" (UniqueName: \"kubernetes.io/projected/775678bf-5721-4f4b-ac7d-f4ff4982a94d-kube-api-access-xjrnc\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.460186 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-config-data\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.460270 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-db-sync-config-data\") pod 
\"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.464712 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-db-sync-config-data\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.464762 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-config-data\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.466344 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-combined-ca-bundle\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.482486 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjrnc\" (UniqueName: \"kubernetes.io/projected/775678bf-5721-4f4b-ac7d-f4ff4982a94d-kube-api-access-xjrnc\") pod \"glance-db-sync-k9qj7\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") " pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:38 crc kubenswrapper[4988]: I1123 08:18:38.603650 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-k9qj7" Nov 23 08:18:39 crc kubenswrapper[4988]: I1123 08:18:39.117021 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-k9qj7"] Nov 23 08:18:39 crc kubenswrapper[4988]: W1123 08:18:39.121729 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod775678bf_5721_4f4b_ac7d_f4ff4982a94d.slice/crio-c207e5a044ccbe5feab4bd68ef181fe5071fd798bb4f24ca1e5d3c198f7894cc WatchSource:0}: Error finding container c207e5a044ccbe5feab4bd68ef181fe5071fd798bb4f24ca1e5d3c198f7894cc: Status 404 returned error can't find the container with id c207e5a044ccbe5feab4bd68ef181fe5071fd798bb4f24ca1e5d3c198f7894cc Nov 23 08:18:39 crc kubenswrapper[4988]: I1123 08:18:39.499699 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-k9qj7" event={"ID":"775678bf-5721-4f4b-ac7d-f4ff4982a94d","Type":"ContainerStarted","Data":"c207e5a044ccbe5feab4bd68ef181fe5071fd798bb4f24ca1e5d3c198f7894cc"} Nov 23 08:18:51 crc kubenswrapper[4988]: I1123 08:18:51.672660 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:18:51 crc kubenswrapper[4988]: I1123 08:18:51.673508 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:18:57 crc 
Nov 23 08:18:57 crc kubenswrapper[4988]: I1123 08:18:57.689095 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-k9qj7" event={"ID":"775678bf-5721-4f4b-ac7d-f4ff4982a94d","Type":"ContainerStarted","Data":"2d15b4cf05610d2b6be1f64a054d2ae820784520e82af142b28122c45c28c00a"}
Nov 23 08:18:57 crc kubenswrapper[4988]: I1123 08:18:57.718359 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-k9qj7" podStartSLOduration=2.7491524099999998 podStartE2EDuration="19.718330959s" podCreationTimestamp="2025-11-23 08:18:38 +0000 UTC" firstStartedPulling="2025-11-23 08:18:39.123420654 +0000 UTC m=+5571.431933417" lastFinishedPulling="2025-11-23 08:18:56.092599193 +0000 UTC m=+5588.401111966" observedRunningTime="2025-11-23 08:18:57.716843383 +0000 UTC m=+5590.025356176" watchObservedRunningTime="2025-11-23 08:18:57.718330959 +0000 UTC m=+5590.026843752"
Nov 23 08:18:58 crc kubenswrapper[4988]: I1123 08:18:58.085655 4988 scope.go:117] "RemoveContainer" containerID="95cad2a2d9c87dc6bb41fd491f8bfa6d9cc609fd42994e1efed96272c4fa1751"
Nov 23 08:19:01 crc kubenswrapper[4988]: I1123 08:19:01.736838 4988 generic.go:334] "Generic (PLEG): container finished" podID="775678bf-5721-4f4b-ac7d-f4ff4982a94d" containerID="2d15b4cf05610d2b6be1f64a054d2ae820784520e82af142b28122c45c28c00a" exitCode=0
Nov 23 08:19:01 crc kubenswrapper[4988]: I1123 08:19:01.736908 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-k9qj7" event={"ID":"775678bf-5721-4f4b-ac7d-f4ff4982a94d","Type":"ContainerDied","Data":"2d15b4cf05610d2b6be1f64a054d2ae820784520e82af142b28122c45c28c00a"}
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.235189 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-k9qj7"
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.259890 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjrnc\" (UniqueName: \"kubernetes.io/projected/775678bf-5721-4f4b-ac7d-f4ff4982a94d-kube-api-access-xjrnc\") pod \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") "
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.262078 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-db-sync-config-data\") pod \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") "
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.262542 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-config-data\") pod \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") "
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.262717 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-combined-ca-bundle\") pod \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\" (UID: \"775678bf-5721-4f4b-ac7d-f4ff4982a94d\") "
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.268302 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "775678bf-5721-4f4b-ac7d-f4ff4982a94d" (UID: "775678bf-5721-4f4b-ac7d-f4ff4982a94d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.279795 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775678bf-5721-4f4b-ac7d-f4ff4982a94d-kube-api-access-xjrnc" (OuterVolumeSpecName: "kube-api-access-xjrnc") pod "775678bf-5721-4f4b-ac7d-f4ff4982a94d" (UID: "775678bf-5721-4f4b-ac7d-f4ff4982a94d"). InnerVolumeSpecName "kube-api-access-xjrnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.325071 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-config-data" (OuterVolumeSpecName: "config-data") pod "775678bf-5721-4f4b-ac7d-f4ff4982a94d" (UID: "775678bf-5721-4f4b-ac7d-f4ff4982a94d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.328884 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "775678bf-5721-4f4b-ac7d-f4ff4982a94d" (UID: "775678bf-5721-4f4b-ac7d-f4ff4982a94d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.366002 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.366044 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.366061 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjrnc\" (UniqueName: \"kubernetes.io/projected/775678bf-5721-4f4b-ac7d-f4ff4982a94d-kube-api-access-xjrnc\") on node \"crc\" DevicePath \"\""
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.366074 4988 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/775678bf-5721-4f4b-ac7d-f4ff4982a94d-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.757519 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-k9qj7" event={"ID":"775678bf-5721-4f4b-ac7d-f4ff4982a94d","Type":"ContainerDied","Data":"c207e5a044ccbe5feab4bd68ef181fe5071fd798bb4f24ca1e5d3c198f7894cc"}
Nov 23 08:19:03 crc kubenswrapper[4988]: I1123 08:19:03.757589 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c207e5a044ccbe5feab4bd68ef181fe5071fd798bb4f24ca1e5d3c198f7894cc"
Need to start a new one" pod="openstack/glance-db-sync-k9qj7" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.186092 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7798c5777-4qdlx"] Nov 23 08:19:04 crc kubenswrapper[4988]: E1123 08:19:04.186654 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775678bf-5721-4f4b-ac7d-f4ff4982a94d" containerName="glance-db-sync" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.186670 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="775678bf-5721-4f4b-ac7d-f4ff4982a94d" containerName="glance-db-sync" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.186844 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="775678bf-5721-4f4b-ac7d-f4ff4982a94d" containerName="glance-db-sync" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.187711 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.211638 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7798c5777-4qdlx"] Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.224480 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.226018 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.229334 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.229578 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.233262 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pcqrh" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.239794 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.282903 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-dns-svc\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.282959 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-config\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.283015 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-logs\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.283035 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-config-data\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.283063 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-scripts\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.283082 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-nb\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.283118 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4989\" (UniqueName: \"kubernetes.io/projected/4bf53aa6-a9d2-4b49-84af-342254332c45-kube-api-access-b4989\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.283137 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.283172 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-sb\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.283285 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4dpt\" (UniqueName: \"kubernetes.io/projected/068790ce-d14c-4234-b1ec-82369d7eae6d-kube-api-access-k4dpt\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.283349 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.288369 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.290552 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.293405 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.298762 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385171 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385237 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385269 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-dns-svc\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385332 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385368 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385410 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-config\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385445 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz7g2\" (UniqueName: \"kubernetes.io/projected/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-kube-api-access-mz7g2\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385472 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-logs\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " 
pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385490 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-config-data\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385513 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385534 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-scripts\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385555 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-nb\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385581 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-logs\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385604 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4989\" (UniqueName: \"kubernetes.io/projected/4bf53aa6-a9d2-4b49-84af-342254332c45-kube-api-access-b4989\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385621 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385654 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-sb\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.385680 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4dpt\" (UniqueName: \"kubernetes.io/projected/068790ce-d14c-4234-b1ec-82369d7eae6d-kube-api-access-k4dpt\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx" 
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.386588 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.387313 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-nb\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.387799 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-logs\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.387713 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-config\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.387944 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-sb\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.390531 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-dns-svc\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.392992 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-config-data\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.393947 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-scripts\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.401325 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4989\" (UniqueName: \"kubernetes.io/projected/4bf53aa6-a9d2-4b49-84af-342254332c45-kube-api-access-b4989\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.401548 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.402825 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4dpt\" (UniqueName: \"kubernetes.io/projected/068790ce-d14c-4234-b1ec-82369d7eae6d-kube-api-access-k4dpt\") pod \"dnsmasq-dns-7798c5777-4qdlx\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " pod="openstack/dnsmasq-dns-7798c5777-4qdlx"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.486680 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.486778 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-logs\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.486845 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.486876 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.486907 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.486937 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz7g2\" (UniqueName: \"kubernetes.io/projected/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-kube-api-access-mz7g2\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.487651 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.487874 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-logs\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.491854 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.492873 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.511861 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.512147 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7798c5777-4qdlx"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.515118 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz7g2\" (UniqueName: \"kubernetes.io/projected/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-kube-api-access-mz7g2\") pod \"glance-default-internal-api-0\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.545490 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 08:19:04 crc kubenswrapper[4988]: I1123 08:19:04.614982 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.011224 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7798c5777-4qdlx"]
Nov 23 08:19:05 crc kubenswrapper[4988]: W1123 08:19:05.015065 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice/crio-961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff WatchSource:0}: Error finding container 961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff: Status 404 returned error can't find the container with id 961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.137882 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.188040 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.280227 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.791974 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96b363ba-fe99-4f59-a5b3-35e9764c1dd8","Type":"ContainerStarted","Data":"56f7bdcf776523fc021510724fa7129e61289237636d0d4f675b35b9dd5222be"}
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.797252 4988 generic.go:334] "Generic (PLEG): container finished" podID="068790ce-d14c-4234-b1ec-82369d7eae6d" containerID="b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64" exitCode=0
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.797347 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" event={"ID":"068790ce-d14c-4234-b1ec-82369d7eae6d","Type":"ContainerDied","Data":"b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64"}
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.797375 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" event={"ID":"068790ce-d14c-4234-b1ec-82369d7eae6d","Type":"ContainerStarted","Data":"961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff"}
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.799681 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bf53aa6-a9d2-4b49-84af-342254332c45","Type":"ContainerStarted","Data":"3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9"}
Nov 23 08:19:05 crc kubenswrapper[4988]: I1123 08:19:05.799725 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bf53aa6-a9d2-4b49-84af-342254332c45","Type":"ContainerStarted","Data":"a6c8940cf470ca9334a7b4782591512aec85ecc2d3dfa6a250401ac828103bb7"}
Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.642753 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
event={"ID":"068790ce-d14c-4234-b1ec-82369d7eae6d","Type":"ContainerStarted","Data":"1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b"} Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.810207 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.812246 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bf53aa6-a9d2-4b49-84af-342254332c45","Type":"ContainerStarted","Data":"fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c"} Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.812357 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerName="glance-log" containerID="cri-o://3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9" gracePeriod=30 Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.812454 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerName="glance-httpd" containerID="cri-o://fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c" gracePeriod=30 Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.816290 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96b363ba-fe99-4f59-a5b3-35e9764c1dd8","Type":"ContainerStarted","Data":"72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2"} Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.816320 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96b363ba-fe99-4f59-a5b3-35e9764c1dd8","Type":"ContainerStarted","Data":"be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d"} Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.816605 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerName="glance-log" containerID="cri-o://be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d" gracePeriod=30 Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.816626 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerName="glance-httpd" containerID="cri-o://72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2" gracePeriod=30 Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.832806 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" podStartSLOduration=2.832787846 podStartE2EDuration="2.832787846s" podCreationTimestamp="2025-11-23 08:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:19:06.829797193 +0000 UTC m=+5599.138309956" watchObservedRunningTime="2025-11-23 08:19:06.832787846 +0000 UTC m=+5599.141300609" Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.855318 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.855303045 podStartE2EDuration="2.855303045s" 
podCreationTimestamp="2025-11-23 08:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:19:06.852738062 +0000 UTC m=+5599.161250825" watchObservedRunningTime="2025-11-23 08:19:06.855303045 +0000 UTC m=+5599.163815808" Nov 23 08:19:06 crc kubenswrapper[4988]: I1123 08:19:06.874822 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.874803681 podStartE2EDuration="2.874803681s" podCreationTimestamp="2025-11-23 08:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:19:06.874040842 +0000 UTC m=+5599.182553605" watchObservedRunningTime="2025-11-23 08:19:06.874803681 +0000 UTC m=+5599.183316444" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.375027 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.413648 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.469400 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4989\" (UniqueName: \"kubernetes.io/projected/4bf53aa6-a9d2-4b49-84af-342254332c45-kube-api-access-b4989\") pod \"4bf53aa6-a9d2-4b49-84af-342254332c45\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.469966 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-scripts\") pod \"4bf53aa6-a9d2-4b49-84af-342254332c45\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470586 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-logs\") pod \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470618 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-logs\") pod \"4bf53aa6-a9d2-4b49-84af-342254332c45\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470638 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-httpd-run\") pod \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470658 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-combined-ca-bundle\") pod \"4bf53aa6-a9d2-4b49-84af-342254332c45\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470691 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-config-data\") pod \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470737 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-config-data\") pod \"4bf53aa6-a9d2-4b49-84af-342254332c45\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470754 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-scripts\") pod \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470780 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-combined-ca-bundle\") pod \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470812 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-httpd-run\") pod \"4bf53aa6-a9d2-4b49-84af-342254332c45\" (UID: \"4bf53aa6-a9d2-4b49-84af-342254332c45\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.470947 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz7g2\" (UniqueName: \"kubernetes.io/projected/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-kube-api-access-mz7g2\") pod \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\" (UID: \"96b363ba-fe99-4f59-a5b3-35e9764c1dd8\") " Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.471000 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-logs" (OuterVolumeSpecName: "logs") pod "96b363ba-fe99-4f59-a5b3-35e9764c1dd8" (UID: "96b363ba-fe99-4f59-a5b3-35e9764c1dd8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.472450 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.473372 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-logs" (OuterVolumeSpecName: "logs") pod "4bf53aa6-a9d2-4b49-84af-342254332c45" (UID: "4bf53aa6-a9d2-4b49-84af-342254332c45"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.473372 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "96b363ba-fe99-4f59-a5b3-35e9764c1dd8" (UID: "96b363ba-fe99-4f59-a5b3-35e9764c1dd8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.474451 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4bf53aa6-a9d2-4b49-84af-342254332c45" (UID: "4bf53aa6-a9d2-4b49-84af-342254332c45"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.475468 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-scripts" (OuterVolumeSpecName: "scripts") pod "4bf53aa6-a9d2-4b49-84af-342254332c45" (UID: "4bf53aa6-a9d2-4b49-84af-342254332c45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.476832 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf53aa6-a9d2-4b49-84af-342254332c45-kube-api-access-b4989" (OuterVolumeSpecName: "kube-api-access-b4989") pod "4bf53aa6-a9d2-4b49-84af-342254332c45" (UID: "4bf53aa6-a9d2-4b49-84af-342254332c45"). InnerVolumeSpecName "kube-api-access-b4989". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.480637 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-kube-api-access-mz7g2" (OuterVolumeSpecName: "kube-api-access-mz7g2") pod "96b363ba-fe99-4f59-a5b3-35e9764c1dd8" (UID: "96b363ba-fe99-4f59-a5b3-35e9764c1dd8"). InnerVolumeSpecName "kube-api-access-mz7g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.488492 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-scripts" (OuterVolumeSpecName: "scripts") pod "96b363ba-fe99-4f59-a5b3-35e9764c1dd8" (UID: "96b363ba-fe99-4f59-a5b3-35e9764c1dd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.509456 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96b363ba-fe99-4f59-a5b3-35e9764c1dd8" (UID: "96b363ba-fe99-4f59-a5b3-35e9764c1dd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.544322 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bf53aa6-a9d2-4b49-84af-342254332c45" (UID: "4bf53aa6-a9d2-4b49-84af-342254332c45"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.583895 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.583926 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.583936 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz7g2\" (UniqueName: \"kubernetes.io/projected/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-kube-api-access-mz7g2\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.583947 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4989\" (UniqueName: \"kubernetes.io/projected/4bf53aa6-a9d2-4b49-84af-342254332c45-kube-api-access-b4989\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.583957 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.583965 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4bf53aa6-a9d2-4b49-84af-342254332c45-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.583973 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.584016 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.584073 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.594608 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-config-data" (OuterVolumeSpecName: "config-data") pod "4bf53aa6-a9d2-4b49-84af-342254332c45" (UID: "4bf53aa6-a9d2-4b49-84af-342254332c45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.595827 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-config-data" (OuterVolumeSpecName: "config-data") pod "96b363ba-fe99-4f59-a5b3-35e9764c1dd8" (UID: "96b363ba-fe99-4f59-a5b3-35e9764c1dd8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.685641 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf53aa6-a9d2-4b49-84af-342254332c45-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.685679 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b363ba-fe99-4f59-a5b3-35e9764c1dd8-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.833973 4988 generic.go:334] "Generic (PLEG): container finished" podID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerID="72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2" exitCode=143 Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.834013 4988 generic.go:334] "Generic (PLEG): container finished" podID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerID="be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d" exitCode=143 Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.834055 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96b363ba-fe99-4f59-a5b3-35e9764c1dd8","Type":"ContainerDied","Data":"72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2"} Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.834088 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96b363ba-fe99-4f59-a5b3-35e9764c1dd8","Type":"ContainerDied","Data":"be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d"} Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.834105 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"96b363ba-fe99-4f59-a5b3-35e9764c1dd8","Type":"ContainerDied","Data":"56f7bdcf776523fc021510724fa7129e61289237636d0d4f675b35b9dd5222be"} Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.834124 4988 scope.go:117] "RemoveContainer" containerID="72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.834259 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.836101 4988 generic.go:334] "Generic (PLEG): container finished" podID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerID="fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c" exitCode=0 Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.836129 4988 generic.go:334] "Generic (PLEG): container finished" podID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerID="3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9" exitCode=143 Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.836353 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.836399 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bf53aa6-a9d2-4b49-84af-342254332c45","Type":"ContainerDied","Data":"fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c"} Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.836423 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bf53aa6-a9d2-4b49-84af-342254332c45","Type":"ContainerDied","Data":"3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9"} Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.836436 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4bf53aa6-a9d2-4b49-84af-342254332c45","Type":"ContainerDied","Data":"a6c8940cf470ca9334a7b4782591512aec85ecc2d3dfa6a250401ac828103bb7"} Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.868721 4988 scope.go:117] "RemoveContainer" containerID="be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.886541 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.899855 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.912700 4988 scope.go:117] "RemoveContainer" containerID="72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2" Nov 23 08:19:07 crc kubenswrapper[4988]: E1123 08:19:07.913942 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2\": container with ID starting with 72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2 not found: ID does not exist" containerID="72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.913981 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2"} err="failed to get container status \"72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2\": rpc error: code = NotFound desc = could not find container \"72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2\": container with ID starting with 72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2 not found: ID does not exist" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.914005 4988 scope.go:117] "RemoveContainer" containerID="be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d" Nov 23 08:19:07 crc kubenswrapper[4988]: E1123 08:19:07.914518 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d\": container with ID starting with be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d not found: ID does not exist" containerID="be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.914547 4988 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d"} err="failed to get container status \"be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d\": rpc error: code = NotFound desc = could not find container \"be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d\": container with ID starting with be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d not found: ID does not exist" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.914567 4988 scope.go:117] "RemoveContainer" containerID="72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.914786 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2"} err="failed to get container status \"72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2\": rpc error: code = NotFound desc = could not find container \"72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2\": container with ID starting with 72817be416117db0eb7c101d7071554d370e815d1446470a581f12feb5e24fe2 not found: ID does not exist" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.914840 4988 scope.go:117] "RemoveContainer" containerID="be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.915080 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d"} err="failed to get container status \"be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d\": rpc error: code = NotFound desc = could not find container \"be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d\": container with ID starting with be2e48ef4b8430b7afd80d93b883c6d1ce91d6e72728dd0a9be7f1c1f2d7589d not found: ID does not exist" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.915104 4988 scope.go:117] "RemoveContainer" containerID="fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.916329 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.926280 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.936772 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:19:07 crc kubenswrapper[4988]: E1123 08:19:07.937151 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerName="glance-httpd" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.937166 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerName="glance-httpd" Nov 23 08:19:07 crc kubenswrapper[4988]: E1123 08:19:07.937177 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerName="glance-httpd" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.937185 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerName="glance-httpd" Nov 23 08:19:07 crc kubenswrapper[4988]: E1123 08:19:07.937260 4988 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerName="glance-log" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.937269 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerName="glance-log" Nov 23 08:19:07 crc kubenswrapper[4988]: E1123 08:19:07.937285 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerName="glance-log" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.937290 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerName="glance-log" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.937475 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerName="glance-httpd" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.937489 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf53aa6-a9d2-4b49-84af-342254332c45" containerName="glance-log" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.937498 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerName="glance-log" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.937510 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" containerName="glance-httpd" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.938435 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.941347 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.941573 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pcqrh" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.941730 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.941842 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.947220 4988 scope.go:117] "RemoveContainer" containerID="3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.947336 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.948762 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.957337 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.957552 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 08:19:07 crc kubenswrapper[4988]: I1123 08:19:07.964183 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003523 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-logs\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003572 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-scripts\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003614 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003639 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003668 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003689 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003770 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003860 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-config-data\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003933 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.003972 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mnt7\" (UniqueName: \"kubernetes.io/projected/70c383dd-727a-48ae-9c3a-7128a82c4777-kube-api-access-7mnt7\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.004000 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.004027 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct8cq\" (UniqueName: \"kubernetes.io/projected/3106b206-7500-439c-a269-2c42f1a456d5-kube-api-access-ct8cq\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.004064 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.004106 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.004492 4988 scope.go:117] "RemoveContainer" containerID="fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c" Nov 23 08:19:08 crc kubenswrapper[4988]: E1123 08:19:08.005313 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c\": container with ID starting with fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c not found: ID does not exist" containerID="fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.005350 4988 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c"} err="failed to get container status \"fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c\": rpc error: code = NotFound desc = could not find container \"fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c\": container with ID starting with fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c not found: ID does not exist" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.005370 4988 scope.go:117] "RemoveContainer" containerID="3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9" Nov 23 08:19:08 crc kubenswrapper[4988]: E1123 08:19:08.005612 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9\": container with ID starting with 3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9 not found: ID does not exist" containerID="3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.005632 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9"} err="failed to get container status \"3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9\": rpc error: code = NotFound desc = could not find container \"3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9\": container with ID starting with 3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9 not found: ID does not exist" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.005648 4988 scope.go:117] "RemoveContainer" containerID="fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.006034 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c"} err="failed to get container status \"fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c\": rpc error: code = NotFound desc = could not find container \"fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c\": container with ID starting with fe22a95cd6ddb1d93ff08faa3f38efc7e669d732d770510619a2f6bb96d5233c not found: ID does not exist" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.006048 4988 scope.go:117] "RemoveContainer" containerID="3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.006113 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.006341 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9"} err="failed to get container status \"3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9\": rpc error: code = NotFound desc = could not find container \"3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9\": container with ID starting with 3236e82e22ff03315306df754cca409b4ee3f07a35163c0f03f5d11705d7fad9 not found: ID does not exist" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.105485 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-scripts\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.105824 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.105850 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.105868 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.105888 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.105924 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.105945 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-config-data\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.105964 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.105995 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mnt7\" (UniqueName: \"kubernetes.io/projected/70c383dd-727a-48ae-9c3a-7128a82c4777-kube-api-access-7mnt7\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.106034 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.106061 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct8cq\" (UniqueName: \"kubernetes.io/projected/3106b206-7500-439c-a269-2c42f1a456d5-kube-api-access-ct8cq\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.106083 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.106108 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.106142 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-logs\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.106697 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-logs\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.106996 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.108001 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.109935 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.110442 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-scripts\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " 
pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.110845 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.111470 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.111923 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.112868 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.112937 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.114047 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-config-data\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.117715 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.122934 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mnt7\" (UniqueName: \"kubernetes.io/projected/70c383dd-727a-48ae-9c3a-7128a82c4777-kube-api-access-7mnt7\") pod \"glance-default-external-api-0\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.129600 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct8cq\" (UniqueName: \"kubernetes.io/projected/3106b206-7500-439c-a269-2c42f1a456d5-kube-api-access-ct8cq\") pod \"glance-default-internal-api-0\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 
08:19:08.263770 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.279429 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.521264 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf53aa6-a9d2-4b49-84af-342254332c45" path="/var/lib/kubelet/pods/4bf53aa6-a9d2-4b49-84af-342254332c45/volumes" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.523911 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b363ba-fe99-4f59-a5b3-35e9764c1dd8" path="/var/lib/kubelet/pods/96b363ba-fe99-4f59-a5b3-35e9764c1dd8/volumes" Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.835126 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:19:08 crc kubenswrapper[4988]: I1123 08:19:08.847317 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"70c383dd-727a-48ae-9c3a-7128a82c4777","Type":"ContainerStarted","Data":"e4f7298aa4d62834e6793ea11fdade7b7067562e42204d058cb86f6fcd67d410"} Nov 23 08:19:09 crc kubenswrapper[4988]: I1123 08:19:09.042939 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:19:09 crc kubenswrapper[4988]: I1123 08:19:09.861330 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3106b206-7500-439c-a269-2c42f1a456d5","Type":"ContainerStarted","Data":"6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9"} Nov 23 08:19:09 crc kubenswrapper[4988]: I1123 08:19:09.861683 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3106b206-7500-439c-a269-2c42f1a456d5","Type":"ContainerStarted","Data":"5b494213000b9c558e659e6ee9d809e0705736cf670d3147138c202a1bf13e4f"} Nov 23 08:19:09 crc kubenswrapper[4988]: I1123 08:19:09.863129 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"70c383dd-727a-48ae-9c3a-7128a82c4777","Type":"ContainerStarted","Data":"c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868"} Nov 23 08:19:10 crc kubenswrapper[4988]: I1123 08:19:10.872311 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3106b206-7500-439c-a269-2c42f1a456d5","Type":"ContainerStarted","Data":"aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d"} Nov 23 08:19:10 crc kubenswrapper[4988]: I1123 08:19:10.873692 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"70c383dd-727a-48ae-9c3a-7128a82c4777","Type":"ContainerStarted","Data":"c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1"} Nov 23 08:19:10 crc kubenswrapper[4988]: I1123 08:19:10.898527 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.898506212 podStartE2EDuration="3.898506212s" podCreationTimestamp="2025-11-23 08:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:19:10.891641235 +0000 UTC m=+5603.200153998" 
watchObservedRunningTime="2025-11-23 08:19:10.898506212 +0000 UTC m=+5603.207018975" Nov 23 08:19:10 crc kubenswrapper[4988]: I1123 08:19:10.924371 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.924350553 podStartE2EDuration="3.924350553s" podCreationTimestamp="2025-11-23 08:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:19:10.912247448 +0000 UTC m=+5603.220760211" watchObservedRunningTime="2025-11-23 08:19:10.924350553 +0000 UTC m=+5603.232863316" Nov 23 08:19:14 crc kubenswrapper[4988]: I1123 08:19:14.515398 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:14 crc kubenswrapper[4988]: I1123 08:19:14.589152 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6d5c6489-289lx"] Nov 23 08:19:14 crc kubenswrapper[4988]: I1123 08:19:14.589451 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" podUID="006175e8-8110-46ec-9d88-102e64fc6360" containerName="dnsmasq-dns" containerID="cri-o://2663454b1095eee8ac4bab0b16d1bec867024229e2296a963adcada156f57978" gracePeriod=10 Nov 23 08:19:14 crc kubenswrapper[4988]: I1123 08:19:14.915455 4988 generic.go:334] "Generic (PLEG): container finished" podID="006175e8-8110-46ec-9d88-102e64fc6360" containerID="2663454b1095eee8ac4bab0b16d1bec867024229e2296a963adcada156f57978" exitCode=0 Nov 23 08:19:14 crc kubenswrapper[4988]: I1123 08:19:14.915734 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" event={"ID":"006175e8-8110-46ec-9d88-102e64fc6360","Type":"ContainerDied","Data":"2663454b1095eee8ac4bab0b16d1bec867024229e2296a963adcada156f57978"} Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.074159 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.241308 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrtqj\" (UniqueName: \"kubernetes.io/projected/006175e8-8110-46ec-9d88-102e64fc6360-kube-api-access-wrtqj\") pod \"006175e8-8110-46ec-9d88-102e64fc6360\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.241452 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-dns-svc\") pod \"006175e8-8110-46ec-9d88-102e64fc6360\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.241599 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-sb\") pod \"006175e8-8110-46ec-9d88-102e64fc6360\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.241634 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-config\") pod \"006175e8-8110-46ec-9d88-102e64fc6360\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.241722 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-nb\") pod \"006175e8-8110-46ec-9d88-102e64fc6360\" (UID: \"006175e8-8110-46ec-9d88-102e64fc6360\") " Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.246997 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006175e8-8110-46ec-9d88-102e64fc6360-kube-api-access-wrtqj" (OuterVolumeSpecName: "kube-api-access-wrtqj") pod "006175e8-8110-46ec-9d88-102e64fc6360" (UID: "006175e8-8110-46ec-9d88-102e64fc6360"). InnerVolumeSpecName "kube-api-access-wrtqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.291605 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-config" (OuterVolumeSpecName: "config") pod "006175e8-8110-46ec-9d88-102e64fc6360" (UID: "006175e8-8110-46ec-9d88-102e64fc6360"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.293821 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "006175e8-8110-46ec-9d88-102e64fc6360" (UID: "006175e8-8110-46ec-9d88-102e64fc6360"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.298526 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "006175e8-8110-46ec-9d88-102e64fc6360" (UID: "006175e8-8110-46ec-9d88-102e64fc6360"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.310010 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "006175e8-8110-46ec-9d88-102e64fc6360" (UID: "006175e8-8110-46ec-9d88-102e64fc6360"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.344589 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.344622 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.344632 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.344640 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/006175e8-8110-46ec-9d88-102e64fc6360-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.344649 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrtqj\" (UniqueName: \"kubernetes.io/projected/006175e8-8110-46ec-9d88-102e64fc6360-kube-api-access-wrtqj\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.926615 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" event={"ID":"006175e8-8110-46ec-9d88-102e64fc6360","Type":"ContainerDied","Data":"65bec2d254938823b6e292f1746ca011c13a849829d9831023cdd0d9f85978cb"} Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.926665 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d6d5c6489-289lx" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.926669 4988 scope.go:117] "RemoveContainer" containerID="2663454b1095eee8ac4bab0b16d1bec867024229e2296a963adcada156f57978" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.964443 4988 scope.go:117] "RemoveContainer" containerID="1abf6ca90f76b21c6aaef98d7c9937cccf4d800732637b3a09695f8adf25fe19" Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.967675 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6d5c6489-289lx"] Nov 23 08:19:15 crc kubenswrapper[4988]: I1123 08:19:15.983171 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d6d5c6489-289lx"] Nov 23 08:19:16 crc kubenswrapper[4988]: I1123 08:19:16.510569 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006175e8-8110-46ec-9d88-102e64fc6360" path="/var/lib/kubelet/pods/006175e8-8110-46ec-9d88-102e64fc6360/volumes" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.264109 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.265366 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.281582 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.281632 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.300037 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.320679 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.323593 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.347570 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.967922 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.968369 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.968429 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 08:19:18 crc kubenswrapper[4988]: I1123 08:19:18.968459 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:20 crc kubenswrapper[4988]: I1123 08:19:20.998002 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:21 crc kubenswrapper[4988]: I1123 08:19:20.999023 4988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 08:19:21 crc 
kubenswrapper[4988]: I1123 08:19:21.002740 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 08:19:21 crc kubenswrapper[4988]: I1123 08:19:21.113125 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 08:19:21 crc kubenswrapper[4988]: I1123 08:19:21.113297 4988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 08:19:21 crc kubenswrapper[4988]: I1123 08:19:21.140885 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 08:19:21 crc kubenswrapper[4988]: I1123 08:19:21.671968 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:19:21 crc kubenswrapper[4988]: I1123 08:19:21.672285 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.806010 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6vdlx"] Nov 23 08:19:31 crc kubenswrapper[4988]: E1123 08:19:31.806958 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006175e8-8110-46ec-9d88-102e64fc6360" containerName="init" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.806971 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="006175e8-8110-46ec-9d88-102e64fc6360" containerName="init" Nov 23 08:19:31 crc kubenswrapper[4988]: E1123 08:19:31.806985 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006175e8-8110-46ec-9d88-102e64fc6360" containerName="dnsmasq-dns" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.806991 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="006175e8-8110-46ec-9d88-102e64fc6360" containerName="dnsmasq-dns" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.807171 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="006175e8-8110-46ec-9d88-102e64fc6360" containerName="dnsmasq-dns" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.807804 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.816284 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a6bc-account-create-vqqr8"] Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.818099 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.820157 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.825938 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6vdlx"] Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.834731 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a6bc-account-create-vqqr8"] Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.957381 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lbg4\" (UniqueName: \"kubernetes.io/projected/1b61d575-20f6-4426-853e-261215c93765-kube-api-access-9lbg4\") pod \"placement-a6bc-account-create-vqqr8\" (UID: \"1b61d575-20f6-4426-853e-261215c93765\") " pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.957672 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949149d5-0dd5-4658-b0c5-44bca1bbe862-operator-scripts\") pod \"placement-db-create-6vdlx\" (UID: \"949149d5-0dd5-4658-b0c5-44bca1bbe862\") " pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.957810 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmrtp\" (UniqueName: \"kubernetes.io/projected/949149d5-0dd5-4658-b0c5-44bca1bbe862-kube-api-access-qmrtp\") pod \"placement-db-create-6vdlx\" (UID: \"949149d5-0dd5-4658-b0c5-44bca1bbe862\") " pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:31 crc kubenswrapper[4988]: I1123 08:19:31.957867 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b61d575-20f6-4426-853e-261215c93765-operator-scripts\") pod \"placement-a6bc-account-create-vqqr8\" (UID: \"1b61d575-20f6-4426-853e-261215c93765\") " pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.059756 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949149d5-0dd5-4658-b0c5-44bca1bbe862-operator-scripts\") pod \"placement-db-create-6vdlx\" (UID: \"949149d5-0dd5-4658-b0c5-44bca1bbe862\") " pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.059815 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmrtp\" (UniqueName: \"kubernetes.io/projected/949149d5-0dd5-4658-b0c5-44bca1bbe862-kube-api-access-qmrtp\") pod \"placement-db-create-6vdlx\" (UID: \"949149d5-0dd5-4658-b0c5-44bca1bbe862\") " pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.059863 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b61d575-20f6-4426-853e-261215c93765-operator-scripts\") pod \"placement-a6bc-account-create-vqqr8\" (UID: \"1b61d575-20f6-4426-853e-261215c93765\") " pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.060590 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949149d5-0dd5-4658-b0c5-44bca1bbe862-operator-scripts\") pod \"placement-db-create-6vdlx\" (UID: \"949149d5-0dd5-4658-b0c5-44bca1bbe862\") " pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.060875 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b61d575-20f6-4426-853e-261215c93765-operator-scripts\") pod \"placement-a6bc-account-create-vqqr8\" (UID: \"1b61d575-20f6-4426-853e-261215c93765\") " pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.060966 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lbg4\" (UniqueName: \"kubernetes.io/projected/1b61d575-20f6-4426-853e-261215c93765-kube-api-access-9lbg4\") pod \"placement-a6bc-account-create-vqqr8\" (UID: \"1b61d575-20f6-4426-853e-261215c93765\") " pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.081968 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lbg4\" (UniqueName: \"kubernetes.io/projected/1b61d575-20f6-4426-853e-261215c93765-kube-api-access-9lbg4\") pod \"placement-a6bc-account-create-vqqr8\" (UID: \"1b61d575-20f6-4426-853e-261215c93765\") " pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.086435 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmrtp\" (UniqueName: \"kubernetes.io/projected/949149d5-0dd5-4658-b0c5-44bca1bbe862-kube-api-access-qmrtp\") pod \"placement-db-create-6vdlx\" (UID: \"949149d5-0dd5-4658-b0c5-44bca1bbe862\") " pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.131487 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.152100 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.626451 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6vdlx"] Nov 23 08:19:32 crc kubenswrapper[4988]: W1123 08:19:32.627950 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod949149d5_0dd5_4658_b0c5_44bca1bbe862.slice/crio-a1303c74ae9195c326dc62b6abb04c75e0b2af98002c48cdc8d6335a80dd389b WatchSource:0}: Error finding container a1303c74ae9195c326dc62b6abb04c75e0b2af98002c48cdc8d6335a80dd389b: Status 404 returned error can't find the container with id a1303c74ae9195c326dc62b6abb04c75e0b2af98002c48cdc8d6335a80dd389b Nov 23 08:19:32 crc kubenswrapper[4988]: I1123 08:19:32.692156 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a6bc-account-create-vqqr8"] Nov 23 08:19:33 crc kubenswrapper[4988]: I1123 08:19:33.106526 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6vdlx" event={"ID":"949149d5-0dd5-4658-b0c5-44bca1bbe862","Type":"ContainerStarted","Data":"77453be31c1407da1ffb642dcd592eb9cc98896d494ea6eeb2a31cfe3b54e252"} Nov 23 08:19:33 crc kubenswrapper[4988]: I1123 08:19:33.106997 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6vdlx" event={"ID":"949149d5-0dd5-4658-b0c5-44bca1bbe862","Type":"ContainerStarted","Data":"a1303c74ae9195c326dc62b6abb04c75e0b2af98002c48cdc8d6335a80dd389b"} Nov 23 08:19:33 crc kubenswrapper[4988]: I1123 08:19:33.108464 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a6bc-account-create-vqqr8" event={"ID":"1b61d575-20f6-4426-853e-261215c93765","Type":"ContainerStarted","Data":"902f7c60bc9e4554a976ff61fb1680e1b5fab153402f3156b5acb4f9cdb6f9d9"} Nov 23 08:19:33 crc kubenswrapper[4988]: I1123 08:19:33.108483 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a6bc-account-create-vqqr8" event={"ID":"1b61d575-20f6-4426-853e-261215c93765","Type":"ContainerStarted","Data":"926b4aeddf324059c59435108ae869dac01f43bd3b3c9488a19975e21d92b2d7"} Nov 23 08:19:33 crc kubenswrapper[4988]: I1123 08:19:33.135305 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-6vdlx" podStartSLOduration=2.135283132 podStartE2EDuration="2.135283132s" podCreationTimestamp="2025-11-23 08:19:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:19:33.125576955 +0000 UTC m=+5625.434089718" watchObservedRunningTime="2025-11-23 08:19:33.135283132 +0000 UTC m=+5625.443795895" Nov 23 08:19:34 crc kubenswrapper[4988]: I1123 08:19:34.116537 4988 generic.go:334] "Generic (PLEG): container finished" podID="1b61d575-20f6-4426-853e-261215c93765" containerID="902f7c60bc9e4554a976ff61fb1680e1b5fab153402f3156b5acb4f9cdb6f9d9" exitCode=0 Nov 23 08:19:34 crc kubenswrapper[4988]: I1123 08:19:34.116606 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a6bc-account-create-vqqr8" event={"ID":"1b61d575-20f6-4426-853e-261215c93765","Type":"ContainerDied","Data":"902f7c60bc9e4554a976ff61fb1680e1b5fab153402f3156b5acb4f9cdb6f9d9"} Nov 23 08:19:34 crc kubenswrapper[4988]: I1123 08:19:34.118631 4988 generic.go:334] "Generic (PLEG): container finished" podID="949149d5-0dd5-4658-b0c5-44bca1bbe862" 
containerID="77453be31c1407da1ffb642dcd592eb9cc98896d494ea6eeb2a31cfe3b54e252" exitCode=0 Nov 23 08:19:34 crc kubenswrapper[4988]: I1123 08:19:34.118670 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6vdlx" event={"ID":"949149d5-0dd5-4658-b0c5-44bca1bbe862","Type":"ContainerDied","Data":"77453be31c1407da1ffb642dcd592eb9cc98896d494ea6eeb2a31cfe3b54e252"} Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.569904 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.575052 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.627049 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lbg4\" (UniqueName: \"kubernetes.io/projected/1b61d575-20f6-4426-853e-261215c93765-kube-api-access-9lbg4\") pod \"1b61d575-20f6-4426-853e-261215c93765\" (UID: \"1b61d575-20f6-4426-853e-261215c93765\") " Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.627107 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b61d575-20f6-4426-853e-261215c93765-operator-scripts\") pod \"1b61d575-20f6-4426-853e-261215c93765\" (UID: \"1b61d575-20f6-4426-853e-261215c93765\") " Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.627162 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmrtp\" (UniqueName: \"kubernetes.io/projected/949149d5-0dd5-4658-b0c5-44bca1bbe862-kube-api-access-qmrtp\") pod \"949149d5-0dd5-4658-b0c5-44bca1bbe862\" (UID: \"949149d5-0dd5-4658-b0c5-44bca1bbe862\") " Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.627245 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949149d5-0dd5-4658-b0c5-44bca1bbe862-operator-scripts\") pod \"949149d5-0dd5-4658-b0c5-44bca1bbe862\" (UID: \"949149d5-0dd5-4658-b0c5-44bca1bbe862\") " Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.628178 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/949149d5-0dd5-4658-b0c5-44bca1bbe862-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "949149d5-0dd5-4658-b0c5-44bca1bbe862" (UID: "949149d5-0dd5-4658-b0c5-44bca1bbe862"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.628406 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b61d575-20f6-4426-853e-261215c93765-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b61d575-20f6-4426-853e-261215c93765" (UID: "1b61d575-20f6-4426-853e-261215c93765"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.642409 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b61d575-20f6-4426-853e-261215c93765-kube-api-access-9lbg4" (OuterVolumeSpecName: "kube-api-access-9lbg4") pod "1b61d575-20f6-4426-853e-261215c93765" (UID: "1b61d575-20f6-4426-853e-261215c93765"). InnerVolumeSpecName "kube-api-access-9lbg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.642476 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949149d5-0dd5-4658-b0c5-44bca1bbe862-kube-api-access-qmrtp" (OuterVolumeSpecName: "kube-api-access-qmrtp") pod "949149d5-0dd5-4658-b0c5-44bca1bbe862" (UID: "949149d5-0dd5-4658-b0c5-44bca1bbe862"). InnerVolumeSpecName "kube-api-access-qmrtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.728652 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lbg4\" (UniqueName: \"kubernetes.io/projected/1b61d575-20f6-4426-853e-261215c93765-kube-api-access-9lbg4\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.728733 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b61d575-20f6-4426-853e-261215c93765-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.728749 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmrtp\" (UniqueName: \"kubernetes.io/projected/949149d5-0dd5-4658-b0c5-44bca1bbe862-kube-api-access-qmrtp\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:35 crc kubenswrapper[4988]: I1123 08:19:35.728763 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949149d5-0dd5-4658-b0c5-44bca1bbe862-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:36 crc kubenswrapper[4988]: I1123 08:19:36.140953 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a6bc-account-create-vqqr8" event={"ID":"1b61d575-20f6-4426-853e-261215c93765","Type":"ContainerDied","Data":"926b4aeddf324059c59435108ae869dac01f43bd3b3c9488a19975e21d92b2d7"} Nov 23 08:19:36 crc kubenswrapper[4988]: I1123 08:19:36.141000 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="926b4aeddf324059c59435108ae869dac01f43bd3b3c9488a19975e21d92b2d7" Nov 23 08:19:36 crc kubenswrapper[4988]: I1123 08:19:36.141039 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a6bc-account-create-vqqr8" Nov 23 08:19:36 crc kubenswrapper[4988]: I1123 08:19:36.143221 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6vdlx" event={"ID":"949149d5-0dd5-4658-b0c5-44bca1bbe862","Type":"ContainerDied","Data":"a1303c74ae9195c326dc62b6abb04c75e0b2af98002c48cdc8d6335a80dd389b"} Nov 23 08:19:36 crc kubenswrapper[4988]: I1123 08:19:36.143322 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1303c74ae9195c326dc62b6abb04c75e0b2af98002c48cdc8d6335a80dd389b" Nov 23 08:19:36 crc kubenswrapper[4988]: I1123 08:19:36.143270 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6vdlx" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.338082 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65495f77b5-pzchj"] Nov 23 08:19:37 crc kubenswrapper[4988]: E1123 08:19:37.338854 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949149d5-0dd5-4658-b0c5-44bca1bbe862" containerName="mariadb-database-create" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.338870 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="949149d5-0dd5-4658-b0c5-44bca1bbe862" containerName="mariadb-database-create" Nov 23 08:19:37 crc kubenswrapper[4988]: E1123 08:19:37.338897 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b61d575-20f6-4426-853e-261215c93765" containerName="mariadb-account-create" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.338904 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b61d575-20f6-4426-853e-261215c93765" containerName="mariadb-account-create" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.339103 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b61d575-20f6-4426-853e-261215c93765" containerName="mariadb-account-create" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.339121 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="949149d5-0dd5-4658-b0c5-44bca1bbe862" containerName="mariadb-database-create" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.343381 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.368916 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-config\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.369043 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-sb\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.369124 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-nb\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.369307 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbtbw\" (UniqueName: \"kubernetes.io/projected/34928546-bc40-4d88-ba37-a5afad947c49-kube-api-access-vbtbw\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.369356 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-dns-svc\") pod 
\"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.369538 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65495f77b5-pzchj"] Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.384914 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rdkz6"] Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.386399 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.390698 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.390956 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-qwmps" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.391585 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.419534 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rdkz6"] Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471203 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/180e5406-f841-4339-85e9-115029643be8-logs\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471257 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-config\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471293 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-scripts\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471312 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-sb\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471333 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jj85\" (UniqueName: \"kubernetes.io/projected/180e5406-f841-4339-85e9-115029643be8-kube-api-access-6jj85\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471355 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-nb\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: 
\"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471408 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbtbw\" (UniqueName: \"kubernetes.io/projected/34928546-bc40-4d88-ba37-a5afad947c49-kube-api-access-vbtbw\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471427 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-dns-svc\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471464 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-config-data\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.471505 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-combined-ca-bundle\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.472339 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-sb\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.472343 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-config\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.472893 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-nb\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.472954 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-dns-svc\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.491681 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbtbw\" (UniqueName: \"kubernetes.io/projected/34928546-bc40-4d88-ba37-a5afad947c49-kube-api-access-vbtbw\") pod \"dnsmasq-dns-65495f77b5-pzchj\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 
08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.572542 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-scripts\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.572621 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jj85\" (UniqueName: \"kubernetes.io/projected/180e5406-f841-4339-85e9-115029643be8-kube-api-access-6jj85\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.572706 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-config-data\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.572798 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-combined-ca-bundle\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.572845 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/180e5406-f841-4339-85e9-115029643be8-logs\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.573484 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/180e5406-f841-4339-85e9-115029643be8-logs\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.576719 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-scripts\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.578719 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-config-data\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.582749 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-combined-ca-bundle\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.593823 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jj85\" (UniqueName: 
\"kubernetes.io/projected/180e5406-f841-4339-85e9-115029643be8-kube-api-access-6jj85\") pod \"placement-db-sync-rdkz6\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.681536 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:37 crc kubenswrapper[4988]: I1123 08:19:37.714740 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:38 crc kubenswrapper[4988]: I1123 08:19:38.161342 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65495f77b5-pzchj"] Nov 23 08:19:38 crc kubenswrapper[4988]: I1123 08:19:38.256146 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rdkz6"] Nov 23 08:19:39 crc kubenswrapper[4988]: I1123 08:19:39.166275 4988 generic.go:334] "Generic (PLEG): container finished" podID="34928546-bc40-4d88-ba37-a5afad947c49" containerID="992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83" exitCode=0 Nov 23 08:19:39 crc kubenswrapper[4988]: I1123 08:19:39.166732 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" event={"ID":"34928546-bc40-4d88-ba37-a5afad947c49","Type":"ContainerDied","Data":"992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83"} Nov 23 08:19:39 crc kubenswrapper[4988]: I1123 08:19:39.166757 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" event={"ID":"34928546-bc40-4d88-ba37-a5afad947c49","Type":"ContainerStarted","Data":"fbd5ce94a70cae232cc7f950be9774d25eeef555bf1cb0c748bb3e36353cb244"} Nov 23 08:19:39 crc kubenswrapper[4988]: I1123 08:19:39.169759 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rdkz6" event={"ID":"180e5406-f841-4339-85e9-115029643be8","Type":"ContainerStarted","Data":"94c272d244327c4515f3cf835b738ce6325c3c6c9a6f9b6b83570d810e8a0537"} Nov 23 08:19:40 crc kubenswrapper[4988]: I1123 08:19:40.184733 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" event={"ID":"34928546-bc40-4d88-ba37-a5afad947c49","Type":"ContainerStarted","Data":"603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5"} Nov 23 08:19:40 crc kubenswrapper[4988]: I1123 08:19:40.185245 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:40 crc kubenswrapper[4988]: I1123 08:19:40.208927 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" podStartSLOduration=3.208907776 podStartE2EDuration="3.208907776s" podCreationTimestamp="2025-11-23 08:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:19:40.206346214 +0000 UTC m=+5632.514858977" watchObservedRunningTime="2025-11-23 08:19:40.208907776 +0000 UTC m=+5632.517420539" Nov 23 08:19:44 crc kubenswrapper[4988]: I1123 08:19:44.236954 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rdkz6" event={"ID":"180e5406-f841-4339-85e9-115029643be8","Type":"ContainerStarted","Data":"e15b0f60fbc0d1914471bc0ce072e8d960e7a7d854d82ffff1f888a1595b5d95"} Nov 23 08:19:44 crc kubenswrapper[4988]: I1123 08:19:44.279039 4988 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rdkz6" podStartSLOduration=2.248741691 podStartE2EDuration="7.27901703s" podCreationTimestamp="2025-11-23 08:19:37 +0000 UTC" firstStartedPulling="2025-11-23 08:19:38.265822679 +0000 UTC m=+5630.574335442" lastFinishedPulling="2025-11-23 08:19:43.296098018 +0000 UTC m=+5635.604610781" observedRunningTime="2025-11-23 08:19:44.263115872 +0000 UTC m=+5636.571628675" watchObservedRunningTime="2025-11-23 08:19:44.27901703 +0000 UTC m=+5636.587529803" Nov 23 08:19:45 crc kubenswrapper[4988]: I1123 08:19:45.249479 4988 generic.go:334] "Generic (PLEG): container finished" podID="180e5406-f841-4339-85e9-115029643be8" containerID="e15b0f60fbc0d1914471bc0ce072e8d960e7a7d854d82ffff1f888a1595b5d95" exitCode=0 Nov 23 08:19:45 crc kubenswrapper[4988]: I1123 08:19:45.249561 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rdkz6" event={"ID":"180e5406-f841-4339-85e9-115029643be8","Type":"ContainerDied","Data":"e15b0f60fbc0d1914471bc0ce072e8d960e7a7d854d82ffff1f888a1595b5d95"} Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.630135 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.749309 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-combined-ca-bundle\") pod \"180e5406-f841-4339-85e9-115029643be8\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.749406 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-scripts\") pod \"180e5406-f841-4339-85e9-115029643be8\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.749439 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-config-data\") pod \"180e5406-f841-4339-85e9-115029643be8\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.749501 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/180e5406-f841-4339-85e9-115029643be8-logs\") pod \"180e5406-f841-4339-85e9-115029643be8\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.749655 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jj85\" (UniqueName: \"kubernetes.io/projected/180e5406-f841-4339-85e9-115029643be8-kube-api-access-6jj85\") pod \"180e5406-f841-4339-85e9-115029643be8\" (UID: \"180e5406-f841-4339-85e9-115029643be8\") " Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.750254 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/180e5406-f841-4339-85e9-115029643be8-logs" (OuterVolumeSpecName: "logs") pod "180e5406-f841-4339-85e9-115029643be8" (UID: "180e5406-f841-4339-85e9-115029643be8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.750799 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/180e5406-f841-4339-85e9-115029643be8-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.755339 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-scripts" (OuterVolumeSpecName: "scripts") pod "180e5406-f841-4339-85e9-115029643be8" (UID: "180e5406-f841-4339-85e9-115029643be8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.755615 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180e5406-f841-4339-85e9-115029643be8-kube-api-access-6jj85" (OuterVolumeSpecName: "kube-api-access-6jj85") pod "180e5406-f841-4339-85e9-115029643be8" (UID: "180e5406-f841-4339-85e9-115029643be8"). InnerVolumeSpecName "kube-api-access-6jj85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.775842 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "180e5406-f841-4339-85e9-115029643be8" (UID: "180e5406-f841-4339-85e9-115029643be8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.775859 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-config-data" (OuterVolumeSpecName: "config-data") pod "180e5406-f841-4339-85e9-115029643be8" (UID: "180e5406-f841-4339-85e9-115029643be8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.852546 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jj85\" (UniqueName: \"kubernetes.io/projected/180e5406-f841-4339-85e9-115029643be8-kube-api-access-6jj85\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.852604 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.852617 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:46 crc kubenswrapper[4988]: I1123 08:19:46.852626 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180e5406-f841-4339-85e9-115029643be8-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.268572 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rdkz6" event={"ID":"180e5406-f841-4339-85e9-115029643be8","Type":"ContainerDied","Data":"94c272d244327c4515f3cf835b738ce6325c3c6c9a6f9b6b83570d810e8a0537"} Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.268624 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c272d244327c4515f3cf835b738ce6325c3c6c9a6f9b6b83570d810e8a0537" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.268685 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rdkz6" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.357084 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-56b7cb9d84-l6fj5"] Nov 23 08:19:47 crc kubenswrapper[4988]: E1123 08:19:47.357535 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180e5406-f841-4339-85e9-115029643be8" containerName="placement-db-sync" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.357552 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="180e5406-f841-4339-85e9-115029643be8" containerName="placement-db-sync" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.357720 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="180e5406-f841-4339-85e9-115029643be8" containerName="placement-db-sync" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.358680 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.361915 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.362799 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.363019 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-qwmps" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.363098 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.363103 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.366452 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56b7cb9d84-l6fj5"] Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.461584 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-public-tls-certs\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.461727 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b03ea7-bd7c-488c-bc46-ba93c7029243-logs\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.461817 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-internal-tls-certs\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.461871 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-config-data\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.461929 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-scripts\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.461954 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76bjq\" (UniqueName: \"kubernetes.io/projected/58b03ea7-bd7c-488c-bc46-ba93c7029243-kube-api-access-76bjq\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.461984 4988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-combined-ca-bundle\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.564239 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-public-tls-certs\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.564462 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b03ea7-bd7c-488c-bc46-ba93c7029243-logs\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.565251 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58b03ea7-bd7c-488c-bc46-ba93c7029243-logs\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.566835 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-internal-tls-certs\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.566910 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-config-data\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.566996 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-scripts\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.567303 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76bjq\" (UniqueName: \"kubernetes.io/projected/58b03ea7-bd7c-488c-bc46-ba93c7029243-kube-api-access-76bjq\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.567966 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-combined-ca-bundle\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.570515 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-internal-tls-certs\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.571384 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-config-data\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.572428 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-scripts\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.572919 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-public-tls-certs\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.574583 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58b03ea7-bd7c-488c-bc46-ba93c7029243-combined-ca-bundle\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.593113 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76bjq\" (UniqueName: \"kubernetes.io/projected/58b03ea7-bd7c-488c-bc46-ba93c7029243-kube-api-access-76bjq\") pod \"placement-56b7cb9d84-l6fj5\" (UID: \"58b03ea7-bd7c-488c-bc46-ba93c7029243\") " pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.681954 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.683587 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.782453 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7798c5777-4qdlx"] Nov 23 08:19:47 crc kubenswrapper[4988]: I1123 08:19:47.782905 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" podUID="068790ce-d14c-4234-b1ec-82369d7eae6d" containerName="dnsmasq-dns" containerID="cri-o://1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b" gracePeriod=10 Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.201784 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-56b7cb9d84-l6fj5"] Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.202736 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:48 crc kubenswrapper[4988]: W1123 08:19:48.211882 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58b03ea7_bd7c_488c_bc46_ba93c7029243.slice/crio-5762f6d12a50d3801241265c6930f5578107c47552b20e34e07262c832f178e7 WatchSource:0}: Error finding container 5762f6d12a50d3801241265c6930f5578107c47552b20e34e07262c832f178e7: Status 404 returned error can't find the container with id 5762f6d12a50d3801241265c6930f5578107c47552b20e34e07262c832f178e7 Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.277181 4988 generic.go:334] "Generic (PLEG): container finished" podID="068790ce-d14c-4234-b1ec-82369d7eae6d" containerID="1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b" exitCode=0 Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.277235 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" event={"ID":"068790ce-d14c-4234-b1ec-82369d7eae6d","Type":"ContainerDied","Data":"1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b"} Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.277259 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.277285 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7798c5777-4qdlx" event={"ID":"068790ce-d14c-4234-b1ec-82369d7eae6d","Type":"ContainerDied","Data":"961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff"} Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.277304 4988 scope.go:117] "RemoveContainer" containerID="1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.279130 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56b7cb9d84-l6fj5" event={"ID":"58b03ea7-bd7c-488c-bc46-ba93c7029243","Type":"ContainerStarted","Data":"5762f6d12a50d3801241265c6930f5578107c47552b20e34e07262c832f178e7"} Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.281906 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-config\") pod \"068790ce-d14c-4234-b1ec-82369d7eae6d\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.281959 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-nb\") pod \"068790ce-d14c-4234-b1ec-82369d7eae6d\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.282074 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4dpt\" (UniqueName: \"kubernetes.io/projected/068790ce-d14c-4234-b1ec-82369d7eae6d-kube-api-access-k4dpt\") pod \"068790ce-d14c-4234-b1ec-82369d7eae6d\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.282129 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-dns-svc\") pod \"068790ce-d14c-4234-b1ec-82369d7eae6d\" (UID: 
\"068790ce-d14c-4234-b1ec-82369d7eae6d\") " Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.285074 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-sb\") pod \"068790ce-d14c-4234-b1ec-82369d7eae6d\" (UID: \"068790ce-d14c-4234-b1ec-82369d7eae6d\") " Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.285931 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068790ce-d14c-4234-b1ec-82369d7eae6d-kube-api-access-k4dpt" (OuterVolumeSpecName: "kube-api-access-k4dpt") pod "068790ce-d14c-4234-b1ec-82369d7eae6d" (UID: "068790ce-d14c-4234-b1ec-82369d7eae6d"). InnerVolumeSpecName "kube-api-access-k4dpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.286262 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4dpt\" (UniqueName: \"kubernetes.io/projected/068790ce-d14c-4234-b1ec-82369d7eae6d-kube-api-access-k4dpt\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.301067 4988 scope.go:117] "RemoveContainer" containerID="b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.326395 4988 scope.go:117] "RemoveContainer" containerID="1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b" Nov 23 08:19:48 crc kubenswrapper[4988]: E1123 08:19:48.326820 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b\": container with ID starting with 1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b not found: ID does not exist" containerID="1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.326873 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b"} err="failed to get container status \"1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b\": rpc error: code = NotFound desc = could not find container \"1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b\": container with ID starting with 1101b325dbfab25bc560ed4843fad9eb1de9d27143a13a2630277be87c28f20b not found: ID does not exist" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.326905 4988 scope.go:117] "RemoveContainer" containerID="b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64" Nov 23 08:19:48 crc kubenswrapper[4988]: E1123 08:19:48.327247 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64\": container with ID starting with b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64 not found: ID does not exist" containerID="b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.327293 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64"} err="failed to get container status \"b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64\": rpc error: 
code = NotFound desc = could not find container \"b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64\": container with ID starting with b3f43fc9fd0c51e674ed7622525282e613d40b464019bc576c6f8c0fac850f64 not found: ID does not exist" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.328452 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "068790ce-d14c-4234-b1ec-82369d7eae6d" (UID: "068790ce-d14c-4234-b1ec-82369d7eae6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.343885 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "068790ce-d14c-4234-b1ec-82369d7eae6d" (UID: "068790ce-d14c-4234-b1ec-82369d7eae6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.346184 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "068790ce-d14c-4234-b1ec-82369d7eae6d" (UID: "068790ce-d14c-4234-b1ec-82369d7eae6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.352964 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-config" (OuterVolumeSpecName: "config") pod "068790ce-d14c-4234-b1ec-82369d7eae6d" (UID: "068790ce-d14c-4234-b1ec-82369d7eae6d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.388332 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.388362 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.388373 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.388398 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/068790ce-d14c-4234-b1ec-82369d7eae6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.700490 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7798c5777-4qdlx"] Nov 23 08:19:48 crc kubenswrapper[4988]: I1123 08:19:48.723021 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7798c5777-4qdlx"] Nov 23 08:19:49 crc kubenswrapper[4988]: I1123 08:19:49.294030 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56b7cb9d84-l6fj5" event={"ID":"58b03ea7-bd7c-488c-bc46-ba93c7029243","Type":"ContainerStarted","Data":"a17ed2f385ebc4a75676e06a76e2c1a3f257e68222ec8e07b09b4ce30918b862"} Nov 23 08:19:49 crc kubenswrapper[4988]: I1123 08:19:49.294098 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-56b7cb9d84-l6fj5" event={"ID":"58b03ea7-bd7c-488c-bc46-ba93c7029243","Type":"ContainerStarted","Data":"501ee82c16f4f4d50246a240d3a4955d7f188991461d526a0ffcaa608552970c"} Nov 23 08:19:49 crc kubenswrapper[4988]: I1123 08:19:49.294876 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:49 crc kubenswrapper[4988]: I1123 08:19:49.294922 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:19:49 crc kubenswrapper[4988]: I1123 08:19:49.319077 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-56b7cb9d84-l6fj5" podStartSLOduration=2.319009037 podStartE2EDuration="2.319009037s" podCreationTimestamp="2025-11-23 08:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:19:49.316048775 +0000 UTC m=+5641.624561588" watchObservedRunningTime="2025-11-23 08:19:49.319009037 +0000 UTC m=+5641.627521850" Nov 23 08:19:50 crc kubenswrapper[4988]: I1123 08:19:50.516005 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068790ce-d14c-4234-b1ec-82369d7eae6d" path="/var/lib/kubelet/pods/068790ce-d14c-4234-b1ec-82369d7eae6d/volumes" Nov 23 08:19:51 crc kubenswrapper[4988]: I1123 08:19:51.672698 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 23 08:19:51 crc kubenswrapper[4988]: I1123 08:19:51.672777 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:19:51 crc kubenswrapper[4988]: I1123 08:19:51.672832 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 08:19:51 crc kubenswrapper[4988]: I1123 08:19:51.673567 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44e4e57ae61d4bbe83fcc675e7532f3d05f6f4998ae2b06e2a67a208ef247d5c"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:19:51 crc kubenswrapper[4988]: I1123 08:19:51.673637 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://44e4e57ae61d4bbe83fcc675e7532f3d05f6f4998ae2b06e2a67a208ef247d5c" gracePeriod=600 Nov 23 08:19:52 crc kubenswrapper[4988]: I1123 08:19:52.332869 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="44e4e57ae61d4bbe83fcc675e7532f3d05f6f4998ae2b06e2a67a208ef247d5c" exitCode=0 Nov 23 08:19:52 crc kubenswrapper[4988]: I1123 08:19:52.333435 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"44e4e57ae61d4bbe83fcc675e7532f3d05f6f4998ae2b06e2a67a208ef247d5c"} Nov 23 08:19:52 crc kubenswrapper[4988]: I1123 08:19:52.333482 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84"} Nov 23 08:19:52 crc kubenswrapper[4988]: I1123 08:19:52.333574 4988 scope.go:117] "RemoveContainer" containerID="1d29259cb437220b682886a0bd11f43cb7a3da6b3f3a3fdb97dc12cef9e7b6b9" Nov 23 08:19:58 crc kubenswrapper[4988]: E1123 08:19:58.892027 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice/crio-961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff\": RecentStats: unable to find data in memory cache]" Nov 23 08:20:09 crc kubenswrapper[4988]: E1123 08:20:09.158298 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice/crio-961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice\": RecentStats: unable to find data in memory cache]" Nov 23 08:20:18 crc kubenswrapper[4988]: I1123 08:20:18.816188 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:20:18 crc kubenswrapper[4988]: I1123 08:20:18.913011 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-56b7cb9d84-l6fj5" Nov 23 08:20:19 crc kubenswrapper[4988]: E1123 08:20:19.369527 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice/crio-961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice\": RecentStats: unable to find data in memory cache]" Nov 23 08:20:29 crc kubenswrapper[4988]: E1123 08:20:29.619035 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice/crio-961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice\": RecentStats: unable to find data in memory cache]" Nov 23 08:20:39 crc kubenswrapper[4988]: E1123 08:20:39.850535 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068790ce_d14c_4234_b1ec_82369d7eae6d.slice/crio-961e6a3101db540dbf52c2df54d513081342a392b3714a4540ad44e52694fbff\": RecentStats: unable to find data in memory cache]" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.711359 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-65xgq"] Nov 23 08:20:40 crc kubenswrapper[4988]: E1123 08:20:40.711859 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068790ce-d14c-4234-b1ec-82369d7eae6d" containerName="dnsmasq-dns" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.711880 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="068790ce-d14c-4234-b1ec-82369d7eae6d" containerName="dnsmasq-dns" Nov 23 08:20:40 crc kubenswrapper[4988]: E1123 08:20:40.711900 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068790ce-d14c-4234-b1ec-82369d7eae6d" containerName="init" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.711908 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="068790ce-d14c-4234-b1ec-82369d7eae6d" containerName="init" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.712140 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="068790ce-d14c-4234-b1ec-82369d7eae6d" containerName="dnsmasq-dns" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.713005 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.724983 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-65xgq"] Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.793966 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dhk9r"] Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.795070 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.811671 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dhk9r"] Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.822765 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883a08ee-d762-4066-86b2-e7274fe3e003-operator-scripts\") pod \"nova-api-db-create-65xgq\" (UID: \"883a08ee-d762-4066-86b2-e7274fe3e003\") " pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.822868 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkndz\" (UniqueName: \"kubernetes.io/projected/883a08ee-d762-4066-86b2-e7274fe3e003-kube-api-access-fkndz\") pod \"nova-api-db-create-65xgq\" (UID: \"883a08ee-d762-4066-86b2-e7274fe3e003\") " pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.898474 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6104-account-create-rfblv"] Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.899474 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.901006 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.908711 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6104-account-create-rfblv"] Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.924367 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-operator-scripts\") pod \"nova-cell0-db-create-dhk9r\" (UID: \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\") " pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.924440 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883a08ee-d762-4066-86b2-e7274fe3e003-operator-scripts\") pod \"nova-api-db-create-65xgq\" (UID: \"883a08ee-d762-4066-86b2-e7274fe3e003\") " pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.924462 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtdc\" (UniqueName: \"kubernetes.io/projected/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-kube-api-access-pxtdc\") pod \"nova-cell0-db-create-dhk9r\" (UID: \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\") " pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.924525 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkndz\" (UniqueName: \"kubernetes.io/projected/883a08ee-d762-4066-86b2-e7274fe3e003-kube-api-access-fkndz\") pod \"nova-api-db-create-65xgq\" (UID: \"883a08ee-d762-4066-86b2-e7274fe3e003\") " pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.925224 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883a08ee-d762-4066-86b2-e7274fe3e003-operator-scripts\") pod \"nova-api-db-create-65xgq\" (UID: \"883a08ee-d762-4066-86b2-e7274fe3e003\") " pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.943637 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkndz\" (UniqueName: \"kubernetes.io/projected/883a08ee-d762-4066-86b2-e7274fe3e003-kube-api-access-fkndz\") pod \"nova-api-db-create-65xgq\" (UID: \"883a08ee-d762-4066-86b2-e7274fe3e003\") " pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.997656 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-drb5f"] Nov 23 08:20:40 crc kubenswrapper[4988]: I1123 08:20:40.998773 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.008681 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-drb5f"] Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.025999 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2srk\" (UniqueName: \"kubernetes.io/projected/1000af15-4604-46b4-a1a4-fc06decac5b7-kube-api-access-h2srk\") pod \"nova-api-6104-account-create-rfblv\" (UID: \"1000af15-4604-46b4-a1a4-fc06decac5b7\") " pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.026086 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-operator-scripts\") pod \"nova-cell0-db-create-dhk9r\" (UID: \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\") " pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.026172 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxtdc\" (UniqueName: \"kubernetes.io/projected/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-kube-api-access-pxtdc\") pod \"nova-cell0-db-create-dhk9r\" (UID: \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\") " pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.026257 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1000af15-4604-46b4-a1a4-fc06decac5b7-operator-scripts\") pod \"nova-api-6104-account-create-rfblv\" (UID: \"1000af15-4604-46b4-a1a4-fc06decac5b7\") " pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.027432 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-operator-scripts\") pod \"nova-cell0-db-create-dhk9r\" (UID: \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\") " pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.031909 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.047211 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxtdc\" (UniqueName: \"kubernetes.io/projected/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-kube-api-access-pxtdc\") pod \"nova-cell0-db-create-dhk9r\" (UID: \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\") " pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.102439 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0198-account-create-bh7n5"] Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.103418 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0198-account-create-bh7n5" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.109934 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.114221 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.127497 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0198-account-create-bh7n5"] Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.133314 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2srk\" (UniqueName: \"kubernetes.io/projected/1000af15-4604-46b4-a1a4-fc06decac5b7-kube-api-access-h2srk\") pod \"nova-api-6104-account-create-rfblv\" (UID: \"1000af15-4604-46b4-a1a4-fc06decac5b7\") " pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.136294 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93f22cd7-cd91-4eb0-8ea6-06e862540405-operator-scripts\") pod \"nova-cell1-db-create-drb5f\" (UID: \"93f22cd7-cd91-4eb0-8ea6-06e862540405\") " pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.136367 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf9c4\" (UniqueName: \"kubernetes.io/projected/93f22cd7-cd91-4eb0-8ea6-06e862540405-kube-api-access-mf9c4\") pod \"nova-cell1-db-create-drb5f\" (UID: \"93f22cd7-cd91-4eb0-8ea6-06e862540405\") " pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.136486 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1000af15-4604-46b4-a1a4-fc06decac5b7-operator-scripts\") pod \"nova-api-6104-account-create-rfblv\" (UID: \"1000af15-4604-46b4-a1a4-fc06decac5b7\") " pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.137488 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1000af15-4604-46b4-a1a4-fc06decac5b7-operator-scripts\") pod \"nova-api-6104-account-create-rfblv\" (UID: \"1000af15-4604-46b4-a1a4-fc06decac5b7\") " pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.182804 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2srk\" (UniqueName: \"kubernetes.io/projected/1000af15-4604-46b4-a1a4-fc06decac5b7-kube-api-access-h2srk\") pod \"nova-api-6104-account-create-rfblv\" (UID: \"1000af15-4604-46b4-a1a4-fc06decac5b7\") " pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.213594 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.239760 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g9bs\" (UniqueName: \"kubernetes.io/projected/befb0fb0-05b2-4ca5-9613-658701028051-kube-api-access-8g9bs\") pod \"nova-cell0-0198-account-create-bh7n5\" (UID: \"befb0fb0-05b2-4ca5-9613-658701028051\") " pod="openstack/nova-cell0-0198-account-create-bh7n5" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.239895 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/befb0fb0-05b2-4ca5-9613-658701028051-operator-scripts\") pod \"nova-cell0-0198-account-create-bh7n5\" (UID: \"befb0fb0-05b2-4ca5-9613-658701028051\") " pod="openstack/nova-cell0-0198-account-create-bh7n5" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.239955 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93f22cd7-cd91-4eb0-8ea6-06e862540405-operator-scripts\") pod \"nova-cell1-db-create-drb5f\" (UID: \"93f22cd7-cd91-4eb0-8ea6-06e862540405\") " pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.239980 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf9c4\" (UniqueName: \"kubernetes.io/projected/93f22cd7-cd91-4eb0-8ea6-06e862540405-kube-api-access-mf9c4\") pod \"nova-cell1-db-create-drb5f\" (UID: \"93f22cd7-cd91-4eb0-8ea6-06e862540405\") " pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.241171 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93f22cd7-cd91-4eb0-8ea6-06e862540405-operator-scripts\") pod \"nova-cell1-db-create-drb5f\" (UID: \"93f22cd7-cd91-4eb0-8ea6-06e862540405\") " pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.256477 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf9c4\" (UniqueName: \"kubernetes.io/projected/93f22cd7-cd91-4eb0-8ea6-06e862540405-kube-api-access-mf9c4\") pod \"nova-cell1-db-create-drb5f\" (UID: \"93f22cd7-cd91-4eb0-8ea6-06e862540405\") " pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.311489 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7ef9-account-create-wkglm"] Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.312747 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.317362 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.317953 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.321824 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7ef9-account-create-wkglm"] Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.341133 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g9bs\" (UniqueName: \"kubernetes.io/projected/befb0fb0-05b2-4ca5-9613-658701028051-kube-api-access-8g9bs\") pod \"nova-cell0-0198-account-create-bh7n5\" (UID: \"befb0fb0-05b2-4ca5-9613-658701028051\") " pod="openstack/nova-cell0-0198-account-create-bh7n5" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.341268 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/befb0fb0-05b2-4ca5-9613-658701028051-operator-scripts\") pod \"nova-cell0-0198-account-create-bh7n5\" (UID: \"befb0fb0-05b2-4ca5-9613-658701028051\") " pod="openstack/nova-cell0-0198-account-create-bh7n5" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.341992 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/befb0fb0-05b2-4ca5-9613-658701028051-operator-scripts\") pod \"nova-cell0-0198-account-create-bh7n5\" (UID: \"befb0fb0-05b2-4ca5-9613-658701028051\") " pod="openstack/nova-cell0-0198-account-create-bh7n5" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.365981 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g9bs\" (UniqueName: \"kubernetes.io/projected/befb0fb0-05b2-4ca5-9613-658701028051-kube-api-access-8g9bs\") pod \"nova-cell0-0198-account-create-bh7n5\" (UID: \"befb0fb0-05b2-4ca5-9613-658701028051\") " pod="openstack/nova-cell0-0198-account-create-bh7n5" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.443430 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg6px\" (UniqueName: \"kubernetes.io/projected/678fca6f-8d7a-41d6-b3e5-7bc209192c12-kube-api-access-wg6px\") pod \"nova-cell1-7ef9-account-create-wkglm\" (UID: \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\") " pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.443512 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678fca6f-8d7a-41d6-b3e5-7bc209192c12-operator-scripts\") pod \"nova-cell1-7ef9-account-create-wkglm\" (UID: \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\") " pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.545869 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg6px\" (UniqueName: \"kubernetes.io/projected/678fca6f-8d7a-41d6-b3e5-7bc209192c12-kube-api-access-wg6px\") pod \"nova-cell1-7ef9-account-create-wkglm\" (UID: \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\") " pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.546330 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678fca6f-8d7a-41d6-b3e5-7bc209192c12-operator-scripts\") pod \"nova-cell1-7ef9-account-create-wkglm\" (UID: \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\") " 
pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.547108 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678fca6f-8d7a-41d6-b3e5-7bc209192c12-operator-scripts\") pod \"nova-cell1-7ef9-account-create-wkglm\" (UID: \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\") " pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.551154 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0198-account-create-bh7n5" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.562756 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg6px\" (UniqueName: \"kubernetes.io/projected/678fca6f-8d7a-41d6-b3e5-7bc209192c12-kube-api-access-wg6px\") pod \"nova-cell1-7ef9-account-create-wkglm\" (UID: \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\") " pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.582149 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-65xgq"] Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.630777 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.640681 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dhk9r"] Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.727061 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6104-account-create-rfblv"] Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.818407 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-drb5f"] Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.867135 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dhk9r" event={"ID":"cd53de0e-1cb3-4d38-a347-2865a4d88cd8","Type":"ContainerStarted","Data":"492364d71b5ed569e5a005b5185778372cd4be88a33d2c29a9781f1b8ef56ff9"} Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.869033 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6104-account-create-rfblv" event={"ID":"1000af15-4604-46b4-a1a4-fc06decac5b7","Type":"ContainerStarted","Data":"43018cc43ccfb4c046cf21f43a76283c5fa544ec1540ff0ee23389145266af52"} Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.870519 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-65xgq" event={"ID":"883a08ee-d762-4066-86b2-e7274fe3e003","Type":"ContainerStarted","Data":"32f8bc967e3acab5a015027aef815378d7e558241c30a5d86dc3d589b7d89562"} Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.870563 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-65xgq" event={"ID":"883a08ee-d762-4066-86b2-e7274fe3e003","Type":"ContainerStarted","Data":"1a0460cd248e0be3da54cd95498e2d4b95103d078ecb9aba92d62f63021d2d5a"} Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.874016 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drb5f" event={"ID":"93f22cd7-cd91-4eb0-8ea6-06e862540405","Type":"ContainerStarted","Data":"dc5f8df3baffe674621bf33b749c80db470fd7be493abf387b79d6702eb3e4a8"} Nov 23 08:20:41 crc kubenswrapper[4988]: I1123 08:20:41.888182 4988 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-65xgq" podStartSLOduration=1.888151782 podStartE2EDuration="1.888151782s" podCreationTimestamp="2025-11-23 08:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:20:41.885422836 +0000 UTC m=+5694.193935599" watchObservedRunningTime="2025-11-23 08:20:41.888151782 +0000 UTC m=+5694.196664545" Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.025709 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0198-account-create-bh7n5"] Nov 23 08:20:42 crc kubenswrapper[4988]: W1123 08:20:42.033682 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbefb0fb0_05b2_4ca5_9613_658701028051.slice/crio-8d20c1f1bee7e049a3f878649a25a1f4279ccef0889c62bcc4f45c084536ca71 WatchSource:0}: Error finding container 8d20c1f1bee7e049a3f878649a25a1f4279ccef0889c62bcc4f45c084536ca71: Status 404 returned error can't find the container with id 8d20c1f1bee7e049a3f878649a25a1f4279ccef0889c62bcc4f45c084536ca71 Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.141876 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7ef9-account-create-wkglm"] Nov 23 08:20:42 crc kubenswrapper[4988]: W1123 08:20:42.144009 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod678fca6f_8d7a_41d6_b3e5_7bc209192c12.slice/crio-d784d36fd0d4900dc12c0bd820c1ceed8bae489232ce6eb3c737d0a8051bfad9 WatchSource:0}: Error finding container d784d36fd0d4900dc12c0bd820c1ceed8bae489232ce6eb3c737d0a8051bfad9: Status 404 returned error can't find the container with id d784d36fd0d4900dc12c0bd820c1ceed8bae489232ce6eb3c737d0a8051bfad9 Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.882551 4988 generic.go:334] "Generic (PLEG): container finished" podID="1000af15-4604-46b4-a1a4-fc06decac5b7" containerID="a673339287ab2ce33ccd4afc760d1d3804d3794a111e5b5dc0c16ef28c93bdb7" exitCode=0 Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.882586 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6104-account-create-rfblv" event={"ID":"1000af15-4604-46b4-a1a4-fc06decac5b7","Type":"ContainerDied","Data":"a673339287ab2ce33ccd4afc760d1d3804d3794a111e5b5dc0c16ef28c93bdb7"} Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.885581 4988 generic.go:334] "Generic (PLEG): container finished" podID="883a08ee-d762-4066-86b2-e7274fe3e003" containerID="32f8bc967e3acab5a015027aef815378d7e558241c30a5d86dc3d589b7d89562" exitCode=0 Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.885649 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-65xgq" event={"ID":"883a08ee-d762-4066-86b2-e7274fe3e003","Type":"ContainerDied","Data":"32f8bc967e3acab5a015027aef815378d7e558241c30a5d86dc3d589b7d89562"} Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.887490 4988 generic.go:334] "Generic (PLEG): container finished" podID="befb0fb0-05b2-4ca5-9613-658701028051" containerID="b04a6f90c0d7b7cea63a84db825f4d9b0e100008469dc9b6df182bddfef1d4a2" exitCode=0 Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.887593 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0198-account-create-bh7n5" 
event={"ID":"befb0fb0-05b2-4ca5-9613-658701028051","Type":"ContainerDied","Data":"b04a6f90c0d7b7cea63a84db825f4d9b0e100008469dc9b6df182bddfef1d4a2"} Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.887626 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0198-account-create-bh7n5" event={"ID":"befb0fb0-05b2-4ca5-9613-658701028051","Type":"ContainerStarted","Data":"8d20c1f1bee7e049a3f878649a25a1f4279ccef0889c62bcc4f45c084536ca71"} Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.889657 4988 generic.go:334] "Generic (PLEG): container finished" podID="93f22cd7-cd91-4eb0-8ea6-06e862540405" containerID="efb6a246d5dd027c159803f0f4421eaa8e358936b80f150d44e9f5887fd175b8" exitCode=0 Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.889726 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drb5f" event={"ID":"93f22cd7-cd91-4eb0-8ea6-06e862540405","Type":"ContainerDied","Data":"efb6a246d5dd027c159803f0f4421eaa8e358936b80f150d44e9f5887fd175b8"} Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.893551 4988 generic.go:334] "Generic (PLEG): container finished" podID="cd53de0e-1cb3-4d38-a347-2865a4d88cd8" containerID="01517096719b60a4207e953ca66513dbf855af601a377d742b1c0fcdefb72d6d" exitCode=0 Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.893644 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dhk9r" event={"ID":"cd53de0e-1cb3-4d38-a347-2865a4d88cd8","Type":"ContainerDied","Data":"01517096719b60a4207e953ca66513dbf855af601a377d742b1c0fcdefb72d6d"} Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.897731 4988 generic.go:334] "Generic (PLEG): container finished" podID="678fca6f-8d7a-41d6-b3e5-7bc209192c12" containerID="7fd84f13d44381eb72ac203eb3a37f48bcfd2e36e4d03bca00e1b1dcb55b2606" exitCode=0 Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.897783 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7ef9-account-create-wkglm" event={"ID":"678fca6f-8d7a-41d6-b3e5-7bc209192c12","Type":"ContainerDied","Data":"7fd84f13d44381eb72ac203eb3a37f48bcfd2e36e4d03bca00e1b1dcb55b2606"} Nov 23 08:20:42 crc kubenswrapper[4988]: I1123 08:20:42.897812 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7ef9-account-create-wkglm" event={"ID":"678fca6f-8d7a-41d6-b3e5-7bc209192c12","Type":"ContainerStarted","Data":"d784d36fd0d4900dc12c0bd820c1ceed8bae489232ce6eb3c737d0a8051bfad9"} Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.280367 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.414744 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-operator-scripts\") pod \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\" (UID: \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.415110 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxtdc\" (UniqueName: \"kubernetes.io/projected/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-kube-api-access-pxtdc\") pod \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\" (UID: \"cd53de0e-1cb3-4d38-a347-2865a4d88cd8\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.415414 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd53de0e-1cb3-4d38-a347-2865a4d88cd8" (UID: "cd53de0e-1cb3-4d38-a347-2865a4d88cd8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.417023 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.420813 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-kube-api-access-pxtdc" (OuterVolumeSpecName: "kube-api-access-pxtdc") pod "cd53de0e-1cb3-4d38-a347-2865a4d88cd8" (UID: "cd53de0e-1cb3-4d38-a347-2865a4d88cd8"). InnerVolumeSpecName "kube-api-access-pxtdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.480225 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.486464 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.498252 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.518575 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxtdc\" (UniqueName: \"kubernetes.io/projected/cd53de0e-1cb3-4d38-a347-2865a4d88cd8-kube-api-access-pxtdc\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.519088 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0198-account-create-bh7n5"
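
The unmount records above close out the volume lifecycle that began at 08:20:40 for these short-lived database-create jobs. Each volume walks the same reconciler ladder: VerifyControllerAttachedVolume started -> MountVolume started -> MountVolume.SetUp succeeded on the way up, then UnmountVolume started -> UnmountVolume.TearDown succeeded -> "Volume detached ... DevicePath \"\"" on the way down. The empty DevicePath is expected here: configmap and projected volumes are materialized by kubelet from the API rather than attached as block devices, so there is no device path to report. A small Python sketch for tracing one volume through such a capture (the pod UID is taken from the records above; the kubelet.log filename is hypothetical, assuming one record per line as journalctl emits):

    import re

    UID = "cd53de0e-1cb3-4d38-a347-2865a4d88cd8"   # nova-cell0-db-create-dhk9r
    STAGES = (
        "VerifyControllerAttachedVolume started",
        "MountVolume started",
        "MountVolume.SetUp succeeded",
        "UnmountVolume started",
        "UnmountVolume.TearDown succeeded",
        "Volume detached",
    )
    TS = re.compile(r"[IEWF]\d{4} (\d{2}:\d{2}:\d{2}\.\d+)")

    with open("kubelet.log") as fh:                # hypothetical journal export
        for line in fh:
            if UID not in line or "operator-scripts" not in line:
                continue
            for stage in STAGES:
                if stage in line:
                    m = TS.search(line)
                    print(m.group(1) if m else "?", stage)

Run in log order this prints the mount steps at 08:20:40-41 and the teardown at 08:20:44, bracketing the job's brief run.
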
Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.522779 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.619467 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1000af15-4604-46b4-a1a4-fc06decac5b7-operator-scripts\") pod \"1000af15-4604-46b4-a1a4-fc06decac5b7\" (UID: \"1000af15-4604-46b4-a1a4-fc06decac5b7\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.619763 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g9bs\" (UniqueName: \"kubernetes.io/projected/befb0fb0-05b2-4ca5-9613-658701028051-kube-api-access-8g9bs\") pod \"befb0fb0-05b2-4ca5-9613-658701028051\" (UID: \"befb0fb0-05b2-4ca5-9613-658701028051\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.619870 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2srk\" (UniqueName: \"kubernetes.io/projected/1000af15-4604-46b4-a1a4-fc06decac5b7-kube-api-access-h2srk\") pod \"1000af15-4604-46b4-a1a4-fc06decac5b7\" (UID: \"1000af15-4604-46b4-a1a4-fc06decac5b7\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.620016 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678fca6f-8d7a-41d6-b3e5-7bc209192c12-operator-scripts\") pod \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\" (UID: \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.619937 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1000af15-4604-46b4-a1a4-fc06decac5b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1000af15-4604-46b4-a1a4-fc06decac5b7" (UID: "1000af15-4604-46b4-a1a4-fc06decac5b7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.620104 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883a08ee-d762-4066-86b2-e7274fe3e003-operator-scripts\") pod \"883a08ee-d762-4066-86b2-e7274fe3e003\" (UID: \"883a08ee-d762-4066-86b2-e7274fe3e003\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.620277 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/befb0fb0-05b2-4ca5-9613-658701028051-operator-scripts\") pod \"befb0fb0-05b2-4ca5-9613-658701028051\" (UID: \"befb0fb0-05b2-4ca5-9613-658701028051\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.620412 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg6px\" (UniqueName: \"kubernetes.io/projected/678fca6f-8d7a-41d6-b3e5-7bc209192c12-kube-api-access-wg6px\") pod \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\" (UID: \"678fca6f-8d7a-41d6-b3e5-7bc209192c12\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.620447 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkndz\" (UniqueName: \"kubernetes.io/projected/883a08ee-d762-4066-86b2-e7274fe3e003-kube-api-access-fkndz\") pod \"883a08ee-d762-4066-86b2-e7274fe3e003\" (UID: \"883a08ee-d762-4066-86b2-e7274fe3e003\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.620552 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678fca6f-8d7a-41d6-b3e5-7bc209192c12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "678fca6f-8d7a-41d6-b3e5-7bc209192c12" (UID: "678fca6f-8d7a-41d6-b3e5-7bc209192c12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.620986 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/883a08ee-d762-4066-86b2-e7274fe3e003-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "883a08ee-d762-4066-86b2-e7274fe3e003" (UID: "883a08ee-d762-4066-86b2-e7274fe3e003"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.621124 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/befb0fb0-05b2-4ca5-9613-658701028051-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "befb0fb0-05b2-4ca5-9613-658701028051" (UID: "befb0fb0-05b2-4ca5-9613-658701028051"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.622525 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1000af15-4604-46b4-a1a4-fc06decac5b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.622551 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678fca6f-8d7a-41d6-b3e5-7bc209192c12-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.622563 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883a08ee-d762-4066-86b2-e7274fe3e003-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.622573 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/befb0fb0-05b2-4ca5-9613-658701028051-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.623409 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/befb0fb0-05b2-4ca5-9613-658701028051-kube-api-access-8g9bs" (OuterVolumeSpecName: "kube-api-access-8g9bs") pod "befb0fb0-05b2-4ca5-9613-658701028051" (UID: "befb0fb0-05b2-4ca5-9613-658701028051"). InnerVolumeSpecName "kube-api-access-8g9bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.623450 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883a08ee-d762-4066-86b2-e7274fe3e003-kube-api-access-fkndz" (OuterVolumeSpecName: "kube-api-access-fkndz") pod "883a08ee-d762-4066-86b2-e7274fe3e003" (UID: "883a08ee-d762-4066-86b2-e7274fe3e003"). InnerVolumeSpecName "kube-api-access-fkndz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.624218 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678fca6f-8d7a-41d6-b3e5-7bc209192c12-kube-api-access-wg6px" (OuterVolumeSpecName: "kube-api-access-wg6px") pod "678fca6f-8d7a-41d6-b3e5-7bc209192c12" (UID: "678fca6f-8d7a-41d6-b3e5-7bc209192c12"). InnerVolumeSpecName "kube-api-access-wg6px". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.626145 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1000af15-4604-46b4-a1a4-fc06decac5b7-kube-api-access-h2srk" (OuterVolumeSpecName: "kube-api-access-h2srk") pod "1000af15-4604-46b4-a1a4-fc06decac5b7" (UID: "1000af15-4604-46b4-a1a4-fc06decac5b7"). InnerVolumeSpecName "kube-api-access-h2srk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.723566 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93f22cd7-cd91-4eb0-8ea6-06e862540405-operator-scripts\") pod \"93f22cd7-cd91-4eb0-8ea6-06e862540405\" (UID: \"93f22cd7-cd91-4eb0-8ea6-06e862540405\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.723884 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf9c4\" (UniqueName: \"kubernetes.io/projected/93f22cd7-cd91-4eb0-8ea6-06e862540405-kube-api-access-mf9c4\") pod \"93f22cd7-cd91-4eb0-8ea6-06e862540405\" (UID: \"93f22cd7-cd91-4eb0-8ea6-06e862540405\") " Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.724226 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f22cd7-cd91-4eb0-8ea6-06e862540405-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93f22cd7-cd91-4eb0-8ea6-06e862540405" (UID: "93f22cd7-cd91-4eb0-8ea6-06e862540405"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.724639 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g9bs\" (UniqueName: \"kubernetes.io/projected/befb0fb0-05b2-4ca5-9613-658701028051-kube-api-access-8g9bs\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.724660 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2srk\" (UniqueName: \"kubernetes.io/projected/1000af15-4604-46b4-a1a4-fc06decac5b7-kube-api-access-h2srk\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.724669 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg6px\" (UniqueName: \"kubernetes.io/projected/678fca6f-8d7a-41d6-b3e5-7bc209192c12-kube-api-access-wg6px\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.724678 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkndz\" (UniqueName: \"kubernetes.io/projected/883a08ee-d762-4066-86b2-e7274fe3e003-kube-api-access-fkndz\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.724687 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93f22cd7-cd91-4eb0-8ea6-06e862540405-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.728559 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f22cd7-cd91-4eb0-8ea6-06e862540405-kube-api-access-mf9c4" (OuterVolumeSpecName: "kube-api-access-mf9c4") pod "93f22cd7-cd91-4eb0-8ea6-06e862540405" (UID: "93f22cd7-cd91-4eb0-8ea6-06e862540405"). InnerVolumeSpecName "kube-api-access-mf9c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.827635 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf9c4\" (UniqueName: \"kubernetes.io/projected/93f22cd7-cd91-4eb0-8ea6-06e862540405-kube-api-access-mf9c4\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.930921 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7ef9-account-create-wkglm" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.930924 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7ef9-account-create-wkglm" event={"ID":"678fca6f-8d7a-41d6-b3e5-7bc209192c12","Type":"ContainerDied","Data":"d784d36fd0d4900dc12c0bd820c1ceed8bae489232ce6eb3c737d0a8051bfad9"} Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.931251 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d784d36fd0d4900dc12c0bd820c1ceed8bae489232ce6eb3c737d0a8051bfad9" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.933048 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6104-account-create-rfblv" event={"ID":"1000af15-4604-46b4-a1a4-fc06decac5b7","Type":"ContainerDied","Data":"43018cc43ccfb4c046cf21f43a76283c5fa544ec1540ff0ee23389145266af52"} Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.933083 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43018cc43ccfb4c046cf21f43a76283c5fa544ec1540ff0ee23389145266af52" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.933100 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6104-account-create-rfblv" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.935077 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-65xgq" event={"ID":"883a08ee-d762-4066-86b2-e7274fe3e003","Type":"ContainerDied","Data":"1a0460cd248e0be3da54cd95498e2d4b95103d078ecb9aba92d62f63021d2d5a"} Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.935100 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a0460cd248e0be3da54cd95498e2d4b95103d078ecb9aba92d62f63021d2d5a" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.935168 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-65xgq" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.948317 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0198-account-create-bh7n5" event={"ID":"befb0fb0-05b2-4ca5-9613-658701028051","Type":"ContainerDied","Data":"8d20c1f1bee7e049a3f878649a25a1f4279ccef0889c62bcc4f45c084536ca71"} Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.948353 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d20c1f1bee7e049a3f878649a25a1f4279ccef0889c62bcc4f45c084536ca71" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.948489 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0198-account-create-bh7n5" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.952144 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-drb5f" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.952144 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drb5f" event={"ID":"93f22cd7-cd91-4eb0-8ea6-06e862540405","Type":"ContainerDied","Data":"dc5f8df3baffe674621bf33b749c80db470fd7be493abf387b79d6702eb3e4a8"} Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.952359 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5f8df3baffe674621bf33b749c80db470fd7be493abf387b79d6702eb3e4a8" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.964731 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dhk9r" event={"ID":"cd53de0e-1cb3-4d38-a347-2865a4d88cd8","Type":"ContainerDied","Data":"492364d71b5ed569e5a005b5185778372cd4be88a33d2c29a9781f1b8ef56ff9"} Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.964778 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="492364d71b5ed569e5a005b5185778372cd4be88a33d2c29a9781f1b8ef56ff9" Nov 23 08:20:44 crc kubenswrapper[4988]: I1123 08:20:44.964842 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dhk9r" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.383260 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t6mmp"] Nov 23 08:20:46 crc kubenswrapper[4988]: E1123 08:20:46.384011 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd53de0e-1cb3-4d38-a347-2865a4d88cd8" containerName="mariadb-database-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384025 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd53de0e-1cb3-4d38-a347-2865a4d88cd8" containerName="mariadb-database-create" Nov 23 08:20:46 crc kubenswrapper[4988]: E1123 08:20:46.384045 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1000af15-4604-46b4-a1a4-fc06decac5b7" containerName="mariadb-account-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384051 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1000af15-4604-46b4-a1a4-fc06decac5b7" containerName="mariadb-account-create" Nov 23 08:20:46 crc kubenswrapper[4988]: E1123 08:20:46.384065 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678fca6f-8d7a-41d6-b3e5-7bc209192c12" containerName="mariadb-account-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384072 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="678fca6f-8d7a-41d6-b3e5-7bc209192c12" containerName="mariadb-account-create" Nov 23 08:20:46 crc kubenswrapper[4988]: E1123 08:20:46.384089 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f22cd7-cd91-4eb0-8ea6-06e862540405" containerName="mariadb-database-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384094 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f22cd7-cd91-4eb0-8ea6-06e862540405" containerName="mariadb-database-create" Nov 23 08:20:46 crc kubenswrapper[4988]: E1123 08:20:46.384105 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883a08ee-d762-4066-86b2-e7274fe3e003" containerName="mariadb-database-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384110 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="883a08ee-d762-4066-86b2-e7274fe3e003" containerName="mariadb-database-create" Nov 23 
08:20:46 crc kubenswrapper[4988]: E1123 08:20:46.384126 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="befb0fb0-05b2-4ca5-9613-658701028051" containerName="mariadb-account-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384132 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="befb0fb0-05b2-4ca5-9613-658701028051" containerName="mariadb-account-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384302 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1000af15-4604-46b4-a1a4-fc06decac5b7" containerName="mariadb-account-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384315 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd53de0e-1cb3-4d38-a347-2865a4d88cd8" containerName="mariadb-database-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384326 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="befb0fb0-05b2-4ca5-9613-658701028051" containerName="mariadb-account-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384338 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="678fca6f-8d7a-41d6-b3e5-7bc209192c12" containerName="mariadb-account-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384350 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="883a08ee-d762-4066-86b2-e7274fe3e003" containerName="mariadb-database-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.384360 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f22cd7-cd91-4eb0-8ea6-06e862540405" containerName="mariadb-database-create" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.385117 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.394028 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.394099 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.394424 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r88wc" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.399526 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t6mmp"] Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.491758 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-scripts\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.491866 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.491954 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-config-data\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.491980 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4xfg\" (UniqueName: \"kubernetes.io/projected/03cebd48-64f9-4237-a919-90259adcd22f-kube-api-access-d4xfg\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.593703 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-config-data\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.593735 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4xfg\" (UniqueName: \"kubernetes.io/projected/03cebd48-64f9-4237-a919-90259adcd22f-kube-api-access-d4xfg\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.593808 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-scripts\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: 
\"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.593862 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.599545 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-scripts\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.599855 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-config-data\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.604237 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.612624 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4xfg\" (UniqueName: \"kubernetes.io/projected/03cebd48-64f9-4237-a919-90259adcd22f-kube-api-access-d4xfg\") pod \"nova-cell0-conductor-db-sync-t6mmp\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:46 crc kubenswrapper[4988]: I1123 08:20:46.727564 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:20:47 crc kubenswrapper[4988]: I1123 08:20:47.184716 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t6mmp"] Nov 23 08:20:47 crc kubenswrapper[4988]: I1123 08:20:47.998263 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t6mmp" event={"ID":"03cebd48-64f9-4237-a919-90259adcd22f","Type":"ContainerStarted","Data":"04da605442f88c7a1259a07c0729eedbf6cc5ca578544592efe9599583544859"} Nov 23 08:20:48 crc kubenswrapper[4988]: E1123 08:20:48.535900 4988 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4c784d9a8904850b85a341c839f676ae1281040fe732f68de4bfde150269f817/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4c784d9a8904850b85a341c839f676ae1281040fe732f68de4bfde150269f817/diff: no such file or directory, extraDiskErr: Nov 23 08:20:56 crc kubenswrapper[4988]: I1123 08:20:56.071217 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t6mmp" event={"ID":"03cebd48-64f9-4237-a919-90259adcd22f","Type":"ContainerStarted","Data":"3cac82a215c408489b0655ecdc15ca7f35c2f255d9afe789c31f12ab112c07fa"} Nov 23 08:20:56 crc kubenswrapper[4988]: I1123 08:20:56.096180 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-t6mmp" podStartSLOduration=1.657394961 podStartE2EDuration="10.096161113s" podCreationTimestamp="2025-11-23 08:20:46 +0000 UTC" firstStartedPulling="2025-11-23 08:20:47.18890036 +0000 UTC m=+5699.497413123" lastFinishedPulling="2025-11-23 08:20:55.627666512 +0000 UTC m=+5707.936179275" observedRunningTime="2025-11-23 08:20:56.090678209 +0000 UTC m=+5708.399190992" watchObservedRunningTime="2025-11-23 08:20:56.096161113 +0000 UTC m=+5708.404673886" Nov 23 08:20:58 crc kubenswrapper[4988]: I1123 08:20:58.302432 4988 scope.go:117] "RemoveContainer" containerID="aae762b6c8986ec19cfe8b9431684dc1961b9de235dd0bef2636e1c325c85ed6" Nov 23 08:20:58 crc kubenswrapper[4988]: I1123 08:20:58.339750 4988 scope.go:117] "RemoveContainer" containerID="c3b137f4895b262be98ee8b0ffb048ef1a40f4f54a1dfcf9e253c5bfd2e04431" Nov 23 08:21:03 crc kubenswrapper[4988]: I1123 08:21:03.162344 4988 generic.go:334] "Generic (PLEG): container finished" podID="03cebd48-64f9-4237-a919-90259adcd22f" containerID="3cac82a215c408489b0655ecdc15ca7f35c2f255d9afe789c31f12ab112c07fa" exitCode=0 Nov 23 08:21:03 crc kubenswrapper[4988]: I1123 08:21:03.162436 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t6mmp" event={"ID":"03cebd48-64f9-4237-a919-90259adcd22f","Type":"ContainerDied","Data":"3cac82a215c408489b0655ecdc15ca7f35c2f255d9afe789c31f12ab112c07fa"}
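
The pod_startup_latency_tracker record above is internally consistent if podStartSLOduration is read as the end-to-end startup time minus the image-pull window (a reading of the numbers in this capture, not a quote from the kubelet source): watchObservedRunningTime - podCreationTimestamp = 08:20:56.096161113 - 08:20:46 = 10.096161113 s, matching podStartE2EDuration exactly; the pull window, taken from the monotonic m=+ offsets, is 5707.936179275 - 5699.497413123 = 8.438766152 s; and 10.096161113 - 8.438766152 = 1.657394961 s, exactly the logged podStartSLOduration. In the earlier record for nova-api-db-create-65xgq both pull timestamps are the zero value (0001-01-01), i.e. no image was pulled, which is why its SLO and E2E durations coincide at 1.888151782 s. The subtraction, as a quick check in Python:

    # Re-derive podStartSLOduration from the record above. The m=+ offsets are
    # seconds of kubelet uptime, so subtracting them sidesteps wall-clock skew.
    first_started_pulling = 5699.497413123
    last_finished_pulling = 5707.936179275
    e2e = 10.096161113                                    # podStartE2EDuration
    slo = e2e - (last_finished_pulling - first_started_pulling)
    print(round(slo, 9))                                  # 1.657394961, as logged
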
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.555858 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4xfg\" (UniqueName: \"kubernetes.io/projected/03cebd48-64f9-4237-a919-90259adcd22f-kube-api-access-d4xfg\") pod \"03cebd48-64f9-4237-a919-90259adcd22f\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.556046 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-scripts\") pod \"03cebd48-64f9-4237-a919-90259adcd22f\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.556128 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-combined-ca-bundle\") pod \"03cebd48-64f9-4237-a919-90259adcd22f\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.556225 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-config-data\") pod \"03cebd48-64f9-4237-a919-90259adcd22f\" (UID: \"03cebd48-64f9-4237-a919-90259adcd22f\") " Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.564666 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03cebd48-64f9-4237-a919-90259adcd22f-kube-api-access-d4xfg" (OuterVolumeSpecName: "kube-api-access-d4xfg") pod "03cebd48-64f9-4237-a919-90259adcd22f" (UID: "03cebd48-64f9-4237-a919-90259adcd22f"). InnerVolumeSpecName "kube-api-access-d4xfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.570333 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-scripts" (OuterVolumeSpecName: "scripts") pod "03cebd48-64f9-4237-a919-90259adcd22f" (UID: "03cebd48-64f9-4237-a919-90259adcd22f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.582562 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03cebd48-64f9-4237-a919-90259adcd22f" (UID: "03cebd48-64f9-4237-a919-90259adcd22f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.583906 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-config-data" (OuterVolumeSpecName: "config-data") pod "03cebd48-64f9-4237-a919-90259adcd22f" (UID: "03cebd48-64f9-4237-a919-90259adcd22f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.658626 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.659129 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.659157 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cebd48-64f9-4237-a919-90259adcd22f-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:04 crc kubenswrapper[4988]: I1123 08:21:04.659169 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4xfg\" (UniqueName: \"kubernetes.io/projected/03cebd48-64f9-4237-a919-90259adcd22f-kube-api-access-d4xfg\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.192120 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t6mmp" event={"ID":"03cebd48-64f9-4237-a919-90259adcd22f","Type":"ContainerDied","Data":"04da605442f88c7a1259a07c0729eedbf6cc5ca578544592efe9599583544859"} Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.192261 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04da605442f88c7a1259a07c0729eedbf6cc5ca578544592efe9599583544859" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.192164 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t6mmp" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.281816 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 08:21:05 crc kubenswrapper[4988]: E1123 08:21:05.282245 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03cebd48-64f9-4237-a919-90259adcd22f" containerName="nova-cell0-conductor-db-sync" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.282263 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="03cebd48-64f9-4237-a919-90259adcd22f" containerName="nova-cell0-conductor-db-sync" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.282479 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="03cebd48-64f9-4237-a919-90259adcd22f" containerName="nova-cell0-conductor-db-sync" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.283092 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.285222 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.285332 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r88wc" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.295089 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.372625 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.372760 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7dzq\" (UniqueName: \"kubernetes.io/projected/c062963f-c5e3-441c-b6a0-76fd001da005-kube-api-access-v7dzq\") pod \"nova-cell0-conductor-0\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.372804 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.475140 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7dzq\" (UniqueName: \"kubernetes.io/projected/c062963f-c5e3-441c-b6a0-76fd001da005-kube-api-access-v7dzq\") pod \"nova-cell0-conductor-0\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.475426 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.475577 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.482894 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.493152 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.497939 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7dzq\" (UniqueName: \"kubernetes.io/projected/c062963f-c5e3-441c-b6a0-76fd001da005-kube-api-access-v7dzq\") pod \"nova-cell0-conductor-0\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:05 crc kubenswrapper[4988]: I1123 08:21:05.612056 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:06 crc kubenswrapper[4988]: I1123 08:21:06.103036 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 08:21:06 crc kubenswrapper[4988]: W1123 08:21:06.104046 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc062963f_c5e3_441c_b6a0_76fd001da005.slice/crio-d7c4a4527736e53bee01a245796e3237ccfd22ddff5104d85717bd3f08fd8834 WatchSource:0}: Error finding container d7c4a4527736e53bee01a245796e3237ccfd22ddff5104d85717bd3f08fd8834: Status 404 returned error can't find the container with id d7c4a4527736e53bee01a245796e3237ccfd22ddff5104d85717bd3f08fd8834 Nov 23 08:21:06 crc kubenswrapper[4988]: I1123 08:21:06.201302 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c062963f-c5e3-441c-b6a0-76fd001da005","Type":"ContainerStarted","Data":"d7c4a4527736e53bee01a245796e3237ccfd22ddff5104d85717bd3f08fd8834"} Nov 23 08:21:07 crc kubenswrapper[4988]: I1123 08:21:07.212768 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c062963f-c5e3-441c-b6a0-76fd001da005","Type":"ContainerStarted","Data":"b8b335d21268545dbc86ee407e743c702acf840a5177837a1b68585c6ee8203d"} Nov 23 08:21:07 crc kubenswrapper[4988]: I1123 08:21:07.213025 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:07 crc kubenswrapper[4988]: I1123 08:21:07.231431 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.231413664 podStartE2EDuration="2.231413664s" podCreationTimestamp="2025-11-23 08:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:21:07.229177619 +0000 UTC m=+5719.537690382" watchObservedRunningTime="2025-11-23 08:21:07.231413664 +0000 UTC m=+5719.539926427" Nov 23 08:21:15 crc kubenswrapper[4988]: I1123 08:21:15.664065 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.162137 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4zj4m"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.163513 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.165856 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.165979 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.181992 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zj4m"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.296932 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.298577 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.300694 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.300868 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-scripts\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.300947 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.301033 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk8h9\" (UniqueName: \"kubernetes.io/projected/8d4d596f-639f-47c4-8785-6c96ce284f50-kube-api-access-qk8h9\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.301063 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-config-data\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.307130 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.361051 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.362288 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.366803 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.374953 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.402396 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk8h9\" (UniqueName: \"kubernetes.io/projected/8d4d596f-639f-47c4-8785-6c96ce284f50-kube-api-access-qk8h9\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.402472 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-config-data\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.402529 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-scripts\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.402577 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.402644 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.402671 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.402746 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tqqg\" (UniqueName: \"kubernetes.io/projected/28ac509b-def3-41f0-9ae6-53bf183721ec-kube-api-access-4tqqg\") pod \"nova-cell1-novncproxy-0\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.431836 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc 
kubenswrapper[4988]: I1123 08:21:16.434778 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-scripts\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.446557 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-config-data\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.457851 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk8h9\" (UniqueName: \"kubernetes.io/projected/8d4d596f-639f-47c4-8785-6c96ce284f50-kube-api-access-qk8h9\") pod \"nova-cell0-cell-mapping-4zj4m\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.466389 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.475912 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.478371 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.479541 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.505928 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.510287 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.510361 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgw6\" (UniqueName: \"kubernetes.io/projected/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-kube-api-access-pdgw6\") pod \"nova-scheduler-0\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.510396 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.510461 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tqqg\" (UniqueName: \"kubernetes.io/projected/28ac509b-def3-41f0-9ae6-53bf183721ec-kube-api-access-4tqqg\") pod \"nova-cell1-novncproxy-0\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.510485 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-config-data\") pod \"nova-scheduler-0\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.510520 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.526815 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.553465 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.557262 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bcb8b5c7-tc6wz"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.568270 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.582222 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bcb8b5c7-tc6wz"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.608272 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tqqg\" (UniqueName: \"kubernetes.io/projected/28ac509b-def3-41f0-9ae6-53bf183721ec-kube-api-access-4tqqg\") pod \"nova-cell1-novncproxy-0\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.613009 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgw6\" (UniqueName: \"kubernetes.io/projected/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-kube-api-access-pdgw6\") pod \"nova-scheduler-0\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.613072 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-config-data\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.613182 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-config-data\") pod \"nova-scheduler-0\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.613242 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdnsq\" (UniqueName: \"kubernetes.io/projected/a115f463-d12d-4d94-9b97-e4982fe02b03-kube-api-access-pdnsq\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.613285 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.613510 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.613541 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a115f463-d12d-4d94-9b97-e4982fe02b03-logs\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.620162 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.622182 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-config-data\") pod \"nova-scheduler-0\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.622991 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.623335 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.625087 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.634542 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.639532 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.644421 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgw6\" (UniqueName: \"kubernetes.io/projected/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-kube-api-access-pdgw6\") pod \"nova-scheduler-0\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.719776 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722467 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722518 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a115f463-d12d-4d94-9b97-e4982fe02b03-logs\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722571 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-sb\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722609 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-config-data\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722625 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-config-data\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722664 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbe259f8-4f37-416e-8700-361c86767339-logs\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722692 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-config\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722741 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdnsq\" (UniqueName: \"kubernetes.io/projected/a115f463-d12d-4d94-9b97-e4982fe02b03-kube-api-access-pdnsq\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722772 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-nb\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722804 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrhp\" (UniqueName: \"kubernetes.io/projected/c49b62c3-be8d-4852-b42a-87aa039dbf10-kube-api-access-wcrhp\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722829 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vrtj\" (UniqueName: \"kubernetes.io/projected/fbe259f8-4f37-416e-8700-361c86767339-kube-api-access-4vrtj\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722847 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.722880 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-dns-svc\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.723289 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a115f463-d12d-4d94-9b97-e4982fe02b03-logs\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.729334 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.731606 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-config-data\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.740744 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdnsq\" (UniqueName: \"kubernetes.io/projected/a115f463-d12d-4d94-9b97-e4982fe02b03-kube-api-access-pdnsq\") pod \"nova-metadata-0\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " pod="openstack/nova-metadata-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.824015 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-nb\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.824058 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrhp\" (UniqueName: \"kubernetes.io/projected/c49b62c3-be8d-4852-b42a-87aa039dbf10-kube-api-access-wcrhp\") 
pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.824094 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vrtj\" (UniqueName: \"kubernetes.io/projected/fbe259f8-4f37-416e-8700-361c86767339-kube-api-access-4vrtj\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.824122 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.824153 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-dns-svc\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.824232 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-sb\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.825027 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-config-data\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.825085 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbe259f8-4f37-416e-8700-361c86767339-logs\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.825128 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-config\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.825753 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-sb\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.826052 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-nb\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.826085 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-config\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.826586 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-dns-svc\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.826590 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbe259f8-4f37-416e-8700-361c86767339-logs\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.830184 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.830754 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-config-data\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.846639 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vrtj\" (UniqueName: \"kubernetes.io/projected/fbe259f8-4f37-416e-8700-361c86767339-kube-api-access-4vrtj\") pod \"nova-api-0\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " pod="openstack/nova-api-0" Nov 23 08:21:16 crc kubenswrapper[4988]: I1123 08:21:16.862435 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrhp\" (UniqueName: \"kubernetes.io/projected/c49b62c3-be8d-4852-b42a-87aa039dbf10-kube-api-access-wcrhp\") pod \"dnsmasq-dns-75bcb8b5c7-tc6wz\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") " pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.015780 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.033464 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.043425 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zj4m"] Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.046293 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.183657 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.260489 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7tv69"] Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.261933 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.264846 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.266751 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.294260 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7tv69"] Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.365823 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-config-data\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.365970 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.366090 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-scripts\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.366140 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m748z\" (UniqueName: \"kubernetes.io/projected/f641da98-25f1-4abc-80bc-0374d3e68666-kube-api-access-m748z\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.391454 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zj4m" event={"ID":"8d4d596f-639f-47c4-8785-6c96ce284f50","Type":"ContainerStarted","Data":"8a94f2866b3a51d312a081d6f3a32254a05de504a542bc8375231161a0d74dc8"} Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.393816 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.395529 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"28ac509b-def3-41f0-9ae6-53bf183721ec","Type":"ContainerStarted","Data":"734fc4c7c53a23c36ce30033e48b4806da5aed9f5e1718329dc248971f5a023c"} Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.471124 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-config-data\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.471232 4988 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.471295 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-scripts\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.471336 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m748z\" (UniqueName: \"kubernetes.io/projected/f641da98-25f1-4abc-80bc-0374d3e68666-kube-api-access-m748z\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.520051 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-config-data\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.521724 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.521757 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-scripts\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.521888 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m748z\" (UniqueName: \"kubernetes.io/projected/f641da98-25f1-4abc-80bc-0374d3e68666-kube-api-access-m748z\") pod \"nova-cell1-conductor-db-sync-7tv69\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.606630 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.649670 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.752650 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bcb8b5c7-tc6wz"] Nov 23 08:21:17 crc kubenswrapper[4988]: W1123 08:21:17.755599 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc49b62c3_be8d_4852_b42a_87aa039dbf10.slice/crio-931758f3dd1502e5e843010a6adec803a7ae11a7324cb3c74ecfd4ca1bcc7017 WatchSource:0}: Error finding container 931758f3dd1502e5e843010a6adec803a7ae11a7324cb3c74ecfd4ca1bcc7017: Status 404 returned error can't find the container with id 931758f3dd1502e5e843010a6adec803a7ae11a7324cb3c74ecfd4ca1bcc7017 Nov 23 08:21:17 crc kubenswrapper[4988]: I1123 08:21:17.847279 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:21:17 crc kubenswrapper[4988]: W1123 08:21:17.866840 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbe259f8_4f37_416e_8700_361c86767339.slice/crio-cd59ac813854ece614df7f44c2e977719f2d7d35407f4fd2ec7e4289da6216ea WatchSource:0}: Error finding container cd59ac813854ece614df7f44c2e977719f2d7d35407f4fd2ec7e4289da6216ea: Status 404 returned error can't find the container with id cd59ac813854ece614df7f44c2e977719f2d7d35407f4fd2ec7e4289da6216ea Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.046674 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7tv69"] Nov 23 08:21:18 crc kubenswrapper[4988]: W1123 08:21:18.050937 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf641da98_25f1_4abc_80bc_0374d3e68666.slice/crio-fe428133804561ba0cdefe3031a012b098838b34d178a59e9da450ecd29e2826 WatchSource:0}: Error finding container fe428133804561ba0cdefe3031a012b098838b34d178a59e9da450ecd29e2826: Status 404 returned error can't find the container with id fe428133804561ba0cdefe3031a012b098838b34d178a59e9da450ecd29e2826 Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.407835 4988 generic.go:334] "Generic (PLEG): container finished" podID="c49b62c3-be8d-4852-b42a-87aa039dbf10" containerID="6c04aaf21d0216f6f27b0ad8fe623d539a0dd861624c1627444f2da3ffd63038" exitCode=0 Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.407954 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" event={"ID":"c49b62c3-be8d-4852-b42a-87aa039dbf10","Type":"ContainerDied","Data":"6c04aaf21d0216f6f27b0ad8fe623d539a0dd861624c1627444f2da3ffd63038"} Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.408440 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" event={"ID":"c49b62c3-be8d-4852-b42a-87aa039dbf10","Type":"ContainerStarted","Data":"931758f3dd1502e5e843010a6adec803a7ae11a7324cb3c74ecfd4ca1bcc7017"} Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.412246 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a115f463-d12d-4d94-9b97-e4982fe02b03","Type":"ContainerStarted","Data":"a01b7b294bbfac40e8cfac7b744f1ad5a434e33219026cfbda3c142c02aa01e3"} Nov 23 08:21:18 crc 
kubenswrapper[4988]: I1123 08:21:18.424643 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3","Type":"ContainerStarted","Data":"6e95b393849f9f6705f70924be9bebc601b926b0a18e17f924ec6e330e26b915"} Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.426868 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zj4m" event={"ID":"8d4d596f-639f-47c4-8785-6c96ce284f50","Type":"ContainerStarted","Data":"1361d33e575f9506111a31effd65ee2cbd71e6fc796ba6b580b67b61c04f3ff7"} Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.440348 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7tv69" event={"ID":"f641da98-25f1-4abc-80bc-0374d3e68666","Type":"ContainerStarted","Data":"af5b22c0d7f5479f42788a85e39297629e7de82fc3830c6e3a4b00b383c8a021"} Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.440396 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7tv69" event={"ID":"f641da98-25f1-4abc-80bc-0374d3e68666","Type":"ContainerStarted","Data":"fe428133804561ba0cdefe3031a012b098838b34d178a59e9da450ecd29e2826"} Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.445158 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbe259f8-4f37-416e-8700-361c86767339","Type":"ContainerStarted","Data":"cd59ac813854ece614df7f44c2e977719f2d7d35407f4fd2ec7e4289da6216ea"} Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.452973 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4zj4m" podStartSLOduration=2.45295011 podStartE2EDuration="2.45295011s" podCreationTimestamp="2025-11-23 08:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:21:18.448814909 +0000 UTC m=+5730.757327692" watchObservedRunningTime="2025-11-23 08:21:18.45295011 +0000 UTC m=+5730.761462873" Nov 23 08:21:18 crc kubenswrapper[4988]: I1123 08:21:18.473275 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7tv69" podStartSLOduration=1.473251925 podStartE2EDuration="1.473251925s" podCreationTimestamp="2025-11-23 08:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:21:18.467095275 +0000 UTC m=+5730.775608048" watchObservedRunningTime="2025-11-23 08:21:18.473251925 +0000 UTC m=+5730.781764688" Nov 23 08:21:19 crc kubenswrapper[4988]: I1123 08:21:19.466504 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" event={"ID":"c49b62c3-be8d-4852-b42a-87aa039dbf10","Type":"ContainerStarted","Data":"39bc3f3737af5376963346319441940e77494c946c081ececa8a85e6a81833a4"} Nov 23 08:21:19 crc kubenswrapper[4988]: I1123 08:21:19.494570 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" podStartSLOduration=3.494552902 podStartE2EDuration="3.494552902s" podCreationTimestamp="2025-11-23 08:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:21:19.489165541 +0000 UTC m=+5731.797678304" watchObservedRunningTime="2025-11-23 08:21:19.494552902 +0000 UTC 
m=+5731.803065665" Nov 23 08:21:20 crc kubenswrapper[4988]: I1123 08:21:20.371601 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:21:20 crc kubenswrapper[4988]: I1123 08:21:20.390818 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:21:20 crc kubenswrapper[4988]: I1123 08:21:20.473138 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.483816 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"28ac509b-def3-41f0-9ae6-53bf183721ec","Type":"ContainerStarted","Data":"ef2fb16aa6147d0d367bdd34aa2176e584f005a973234e0b8002a227a117ac7d"} Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.483884 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="28ac509b-def3-41f0-9ae6-53bf183721ec" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://ef2fb16aa6147d0d367bdd34aa2176e584f005a973234e0b8002a227a117ac7d" gracePeriod=30 Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.487622 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a115f463-d12d-4d94-9b97-e4982fe02b03","Type":"ContainerStarted","Data":"59cfd3129bdac1daccc2e5ad07beaf0939fd8f57d21188bf050f6f9867d4918a"} Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.487674 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a115f463-d12d-4d94-9b97-e4982fe02b03","Type":"ContainerStarted","Data":"b3546375d08e4e5c0e47568011355ae39bf2d9d99933cc0827dd36dac4c67eab"} Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.487766 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerName="nova-metadata-metadata" containerID="cri-o://59cfd3129bdac1daccc2e5ad07beaf0939fd8f57d21188bf050f6f9867d4918a" gracePeriod=30 Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.487764 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerName="nova-metadata-log" containerID="cri-o://b3546375d08e4e5c0e47568011355ae39bf2d9d99933cc0827dd36dac4c67eab" gracePeriod=30 Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.493866 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3","Type":"ContainerStarted","Data":"fd172833764f2175ea5002b94bac5d12a31b5c96a770662c2fa68a60bedb5160"} Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.497229 4988 generic.go:334] "Generic (PLEG): container finished" podID="f641da98-25f1-4abc-80bc-0374d3e68666" containerID="af5b22c0d7f5479f42788a85e39297629e7de82fc3830c6e3a4b00b383c8a021" exitCode=0 Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.497282 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7tv69" event={"ID":"f641da98-25f1-4abc-80bc-0374d3e68666","Type":"ContainerDied","Data":"af5b22c0d7f5479f42788a85e39297629e7de82fc3830c6e3a4b00b383c8a021"} Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.500986 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"fbe259f8-4f37-416e-8700-361c86767339","Type":"ContainerStarted","Data":"dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51"} Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.501021 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbe259f8-4f37-416e-8700-361c86767339","Type":"ContainerStarted","Data":"a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9"} Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.510097 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.019898623 podStartE2EDuration="5.510079878s" podCreationTimestamp="2025-11-23 08:21:16 +0000 UTC" firstStartedPulling="2025-11-23 08:21:17.205694388 +0000 UTC m=+5729.514207151" lastFinishedPulling="2025-11-23 08:21:20.695875643 +0000 UTC m=+5733.004388406" observedRunningTime="2025-11-23 08:21:21.510079908 +0000 UTC m=+5733.818592681" watchObservedRunningTime="2025-11-23 08:21:21.510079878 +0000 UTC m=+5733.818592641" Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.538823 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.51691523 podStartE2EDuration="5.538804199s" podCreationTimestamp="2025-11-23 08:21:16 +0000 UTC" firstStartedPulling="2025-11-23 08:21:17.674420795 +0000 UTC m=+5729.982933558" lastFinishedPulling="2025-11-23 08:21:20.696309744 +0000 UTC m=+5733.004822527" observedRunningTime="2025-11-23 08:21:21.535353135 +0000 UTC m=+5733.843865908" watchObservedRunningTime="2025-11-23 08:21:21.538804199 +0000 UTC m=+5733.847316972" Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.559341 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.235518635 podStartE2EDuration="5.55932535s" podCreationTimestamp="2025-11-23 08:21:16 +0000 UTC" firstStartedPulling="2025-11-23 08:21:17.370629523 +0000 UTC m=+5729.679142286" lastFinishedPulling="2025-11-23 08:21:20.694436238 +0000 UTC m=+5733.002949001" observedRunningTime="2025-11-23 08:21:21.552668458 +0000 UTC m=+5733.861181221" watchObservedRunningTime="2025-11-23 08:21:21.55932535 +0000 UTC m=+5733.867838113" Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.620506 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:21 crc kubenswrapper[4988]: I1123 08:21:21.720253 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 08:21:22 crc kubenswrapper[4988]: I1123 08:21:22.016972 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:21:22 crc kubenswrapper[4988]: I1123 08:21:22.017056 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:21:22 crc kubenswrapper[4988]: I1123 08:21:22.516544 4988 generic.go:334] "Generic (PLEG): container finished" podID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerID="b3546375d08e4e5c0e47568011355ae39bf2d9d99933cc0827dd36dac4c67eab" exitCode=143 Nov 23 08:21:22 crc kubenswrapper[4988]: I1123 08:21:22.516712 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a115f463-d12d-4d94-9b97-e4982fe02b03","Type":"ContainerDied","Data":"b3546375d08e4e5c0e47568011355ae39bf2d9d99933cc0827dd36dac4c67eab"} Nov 23 08:21:22 crc 
kubenswrapper[4988]: I1123 08:21:22.933940 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:22 crc kubenswrapper[4988]: I1123 08:21:22.952934 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.120310499 podStartE2EDuration="6.9529029s" podCreationTimestamp="2025-11-23 08:21:16 +0000 UTC" firstStartedPulling="2025-11-23 08:21:17.8705642 +0000 UTC m=+5730.179076963" lastFinishedPulling="2025-11-23 08:21:20.703156601 +0000 UTC m=+5733.011669364" observedRunningTime="2025-11-23 08:21:21.603537969 +0000 UTC m=+5733.912050742" watchObservedRunningTime="2025-11-23 08:21:22.9529029 +0000 UTC m=+5735.261415703" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.082914 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-scripts\") pod \"f641da98-25f1-4abc-80bc-0374d3e68666\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.083051 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-combined-ca-bundle\") pod \"f641da98-25f1-4abc-80bc-0374d3e68666\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.083122 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-config-data\") pod \"f641da98-25f1-4abc-80bc-0374d3e68666\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.084082 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m748z\" (UniqueName: \"kubernetes.io/projected/f641da98-25f1-4abc-80bc-0374d3e68666-kube-api-access-m748z\") pod \"f641da98-25f1-4abc-80bc-0374d3e68666\" (UID: \"f641da98-25f1-4abc-80bc-0374d3e68666\") " Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.088571 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-scripts" (OuterVolumeSpecName: "scripts") pod "f641da98-25f1-4abc-80bc-0374d3e68666" (UID: "f641da98-25f1-4abc-80bc-0374d3e68666"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.106463 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f641da98-25f1-4abc-80bc-0374d3e68666-kube-api-access-m748z" (OuterVolumeSpecName: "kube-api-access-m748z") pod "f641da98-25f1-4abc-80bc-0374d3e68666" (UID: "f641da98-25f1-4abc-80bc-0374d3e68666"). InnerVolumeSpecName "kube-api-access-m748z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.118227 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-config-data" (OuterVolumeSpecName: "config-data") pod "f641da98-25f1-4abc-80bc-0374d3e68666" (UID: "f641da98-25f1-4abc-80bc-0374d3e68666"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.129743 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f641da98-25f1-4abc-80bc-0374d3e68666" (UID: "f641da98-25f1-4abc-80bc-0374d3e68666"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.187135 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.187425 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.187437 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f641da98-25f1-4abc-80bc-0374d3e68666-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.187447 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m748z\" (UniqueName: \"kubernetes.io/projected/f641da98-25f1-4abc-80bc-0374d3e68666-kube-api-access-m748z\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.530993 4988 generic.go:334] "Generic (PLEG): container finished" podID="8d4d596f-639f-47c4-8785-6c96ce284f50" containerID="1361d33e575f9506111a31effd65ee2cbd71e6fc796ba6b580b67b61c04f3ff7" exitCode=0 Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.531099 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zj4m" event={"ID":"8d4d596f-639f-47c4-8785-6c96ce284f50","Type":"ContainerDied","Data":"1361d33e575f9506111a31effd65ee2cbd71e6fc796ba6b580b67b61c04f3ff7"} Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.533454 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7tv69" event={"ID":"f641da98-25f1-4abc-80bc-0374d3e68666","Type":"ContainerDied","Data":"fe428133804561ba0cdefe3031a012b098838b34d178a59e9da450ecd29e2826"} Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.533519 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe428133804561ba0cdefe3031a012b098838b34d178a59e9da450ecd29e2826" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.533478 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7tv69" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.672562 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:21:23 crc kubenswrapper[4988]: E1123 08:21:23.673053 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f641da98-25f1-4abc-80bc-0374d3e68666" containerName="nova-cell1-conductor-db-sync" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.673077 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f641da98-25f1-4abc-80bc-0374d3e68666" containerName="nova-cell1-conductor-db-sync" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.673447 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f641da98-25f1-4abc-80bc-0374d3e68666" containerName="nova-cell1-conductor-db-sync" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.674173 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.680419 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.693659 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.817787 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.818043 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.818087 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whgq\" (UniqueName: \"kubernetes.io/projected/d3f67971-3923-4879-834e-66c6946e1b96-kube-api-access-4whgq\") pod \"nova-cell1-conductor-0\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.920423 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.920478 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whgq\" (UniqueName: \"kubernetes.io/projected/d3f67971-3923-4879-834e-66c6946e1b96-kube-api-access-4whgq\") pod \"nova-cell1-conductor-0\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.920727 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.926776 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.937165 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:23 crc kubenswrapper[4988]: I1123 08:21:23.937968 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4whgq\" (UniqueName: \"kubernetes.io/projected/d3f67971-3923-4879-834e-66c6946e1b96-kube-api-access-4whgq\") pod \"nova-cell1-conductor-0\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.009784 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.592620 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:21:24 crc kubenswrapper[4988]: W1123 08:21:24.598857 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3f67971_3923_4879_834e_66c6946e1b96.slice/crio-059a327ca9f3810e451fe37d8b90e0e162e76d041fc5bab1794fa73a839d4924 WatchSource:0}: Error finding container 059a327ca9f3810e451fe37d8b90e0e162e76d041fc5bab1794fa73a839d4924: Status 404 returned error can't find the container with id 059a327ca9f3810e451fe37d8b90e0e162e76d041fc5bab1794fa73a839d4924 Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.827547 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.943363 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk8h9\" (UniqueName: \"kubernetes.io/projected/8d4d596f-639f-47c4-8785-6c96ce284f50-kube-api-access-qk8h9\") pod \"8d4d596f-639f-47c4-8785-6c96ce284f50\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.944233 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-scripts\") pod \"8d4d596f-639f-47c4-8785-6c96ce284f50\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.944324 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-config-data\") pod \"8d4d596f-639f-47c4-8785-6c96ce284f50\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.944497 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-combined-ca-bundle\") pod \"8d4d596f-639f-47c4-8785-6c96ce284f50\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.951440 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d4d596f-639f-47c4-8785-6c96ce284f50-kube-api-access-qk8h9" (OuterVolumeSpecName: "kube-api-access-qk8h9") pod "8d4d596f-639f-47c4-8785-6c96ce284f50" (UID: "8d4d596f-639f-47c4-8785-6c96ce284f50"). InnerVolumeSpecName "kube-api-access-qk8h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.951573 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-scripts" (OuterVolumeSpecName: "scripts") pod "8d4d596f-639f-47c4-8785-6c96ce284f50" (UID: "8d4d596f-639f-47c4-8785-6c96ce284f50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:24 crc kubenswrapper[4988]: E1123 08:21:24.967075 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-config-data podName:8d4d596f-639f-47c4-8785-6c96ce284f50 nodeName:}" failed. No retries permitted until 2025-11-23 08:21:25.467040083 +0000 UTC m=+5737.775552846 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-config-data") pod "8d4d596f-639f-47c4-8785-6c96ce284f50" (UID: "8d4d596f-639f-47c4-8785-6c96ce284f50") : error deleting /var/lib/kubelet/pods/8d4d596f-639f-47c4-8785-6c96ce284f50/volume-subpaths: remove /var/lib/kubelet/pods/8d4d596f-639f-47c4-8785-6c96ce284f50/volume-subpaths: no such file or directory Nov 23 08:21:24 crc kubenswrapper[4988]: I1123 08:21:24.970226 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d4d596f-639f-47c4-8785-6c96ce284f50" (UID: "8d4d596f-639f-47c4-8785-6c96ce284f50"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.047036 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.047073 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk8h9\" (UniqueName: \"kubernetes.io/projected/8d4d596f-639f-47c4-8785-6c96ce284f50-kube-api-access-qk8h9\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.047090 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.558291 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-config-data\") pod \"8d4d596f-639f-47c4-8785-6c96ce284f50\" (UID: \"8d4d596f-639f-47c4-8785-6c96ce284f50\") " Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.565174 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4zj4m" event={"ID":"8d4d596f-639f-47c4-8785-6c96ce284f50","Type":"ContainerDied","Data":"8a94f2866b3a51d312a081d6f3a32254a05de504a542bc8375231161a0d74dc8"} Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.565277 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a94f2866b3a51d312a081d6f3a32254a05de504a542bc8375231161a0d74dc8" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.565272 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4zj4m" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.565806 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-config-data" (OuterVolumeSpecName: "config-data") pod "8d4d596f-639f-47c4-8785-6c96ce284f50" (UID: "8d4d596f-639f-47c4-8785-6c96ce284f50"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.572958 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3f67971-3923-4879-834e-66c6946e1b96","Type":"ContainerStarted","Data":"deb82dab5b7054b5f9ee8d945f16bbea50ecd99b6a57bdfcf01cc339e8299035"} Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.573036 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3f67971-3923-4879-834e-66c6946e1b96","Type":"ContainerStarted","Data":"059a327ca9f3810e451fe37d8b90e0e162e76d041fc5bab1794fa73a839d4924"} Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.573116 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.622755 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.62271684 podStartE2EDuration="2.62271684s" podCreationTimestamp="2025-11-23 08:21:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:21:25.606951115 +0000 UTC m=+5737.915463918" watchObservedRunningTime="2025-11-23 08:21:25.62271684 +0000 UTC m=+5737.931229633" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.663274 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d596f-639f-47c4-8785-6c96ce284f50-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.782571 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.783023 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fbe259f8-4f37-416e-8700-361c86767339" containerName="nova-api-log" containerID="cri-o://a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9" gracePeriod=30 Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.783925 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fbe259f8-4f37-416e-8700-361c86767339" containerName="nova-api-api" containerID="cri-o://dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51" gracePeriod=30 Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.796796 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:21:25 crc kubenswrapper[4988]: I1123 08:21:25.797174 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9c73840a-5954-4d7d-8b9c-d1bc99ae90b3" containerName="nova-scheduler-scheduler" containerID="cri-o://fd172833764f2175ea5002b94bac5d12a31b5c96a770662c2fa68a60bedb5160" gracePeriod=30 Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.343546 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.385955 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-combined-ca-bundle\") pod \"fbe259f8-4f37-416e-8700-361c86767339\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.386323 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbe259f8-4f37-416e-8700-361c86767339-logs\") pod \"fbe259f8-4f37-416e-8700-361c86767339\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.387856 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbe259f8-4f37-416e-8700-361c86767339-logs" (OuterVolumeSpecName: "logs") pod "fbe259f8-4f37-416e-8700-361c86767339" (UID: "fbe259f8-4f37-416e-8700-361c86767339"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.435794 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fbe259f8-4f37-416e-8700-361c86767339" (UID: "fbe259f8-4f37-416e-8700-361c86767339"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.487905 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-config-data\") pod \"fbe259f8-4f37-416e-8700-361c86767339\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.487994 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vrtj\" (UniqueName: \"kubernetes.io/projected/fbe259f8-4f37-416e-8700-361c86767339-kube-api-access-4vrtj\") pod \"fbe259f8-4f37-416e-8700-361c86767339\" (UID: \"fbe259f8-4f37-416e-8700-361c86767339\") " Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.488733 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbe259f8-4f37-416e-8700-361c86767339-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.488783 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.492336 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe259f8-4f37-416e-8700-361c86767339-kube-api-access-4vrtj" (OuterVolumeSpecName: "kube-api-access-4vrtj") pod "fbe259f8-4f37-416e-8700-361c86767339" (UID: "fbe259f8-4f37-416e-8700-361c86767339"). InnerVolumeSpecName "kube-api-access-4vrtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.512641 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-config-data" (OuterVolumeSpecName: "config-data") pod "fbe259f8-4f37-416e-8700-361c86767339" (UID: "fbe259f8-4f37-416e-8700-361c86767339"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.586254 4988 generic.go:334] "Generic (PLEG): container finished" podID="fbe259f8-4f37-416e-8700-361c86767339" containerID="dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51" exitCode=0 Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.586317 4988 generic.go:334] "Generic (PLEG): container finished" podID="fbe259f8-4f37-416e-8700-361c86767339" containerID="a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9" exitCode=143 Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.588159 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.588409 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbe259f8-4f37-416e-8700-361c86767339","Type":"ContainerDied","Data":"dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51"} Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.588481 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbe259f8-4f37-416e-8700-361c86767339","Type":"ContainerDied","Data":"a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9"} Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.588496 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fbe259f8-4f37-416e-8700-361c86767339","Type":"ContainerDied","Data":"cd59ac813854ece614df7f44c2e977719f2d7d35407f4fd2ec7e4289da6216ea"} Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.588521 4988 scope.go:117] "RemoveContainer" containerID="dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.590576 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vrtj\" (UniqueName: \"kubernetes.io/projected/fbe259f8-4f37-416e-8700-361c86767339-kube-api-access-4vrtj\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.590605 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbe259f8-4f37-416e-8700-361c86767339-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.613603 4988 scope.go:117] "RemoveContainer" containerID="a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.638720 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.665239 4988 scope.go:117] "RemoveContainer" containerID="dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51" Nov 23 08:21:26 crc kubenswrapper[4988]: E1123 08:21:26.669295 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51\": container with ID 
starting with dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51 not found: ID does not exist" containerID="dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.669327 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51"} err="failed to get container status \"dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51\": rpc error: code = NotFound desc = could not find container \"dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51\": container with ID starting with dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51 not found: ID does not exist" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.669351 4988 scope.go:117] "RemoveContainer" containerID="a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9" Nov 23 08:21:26 crc kubenswrapper[4988]: E1123 08:21:26.675037 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9\": container with ID starting with a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9 not found: ID does not exist" containerID="a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.675079 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9"} err="failed to get container status \"a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9\": rpc error: code = NotFound desc = could not find container \"a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9\": container with ID starting with a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9 not found: ID does not exist" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.675110 4988 scope.go:117] "RemoveContainer" containerID="dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.677875 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51"} err="failed to get container status \"dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51\": rpc error: code = NotFound desc = could not find container \"dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51\": container with ID starting with dd19773047b2642f629afe74aeadf7fb3712da65da23ad59e53fe10a19524d51 not found: ID does not exist" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.677893 4988 scope.go:117] "RemoveContainer" containerID="a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.679108 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9"} err="failed to get container status \"a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9\": rpc error: code = NotFound desc = could not find container \"a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9\": container with ID starting with a2e6b5415fc9e0d27979e8a6bea1d5a8c525da3cac1c91fb60e6420084858fe9 not found: ID does not 
exist" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.684346 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.694255 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 08:21:26 crc kubenswrapper[4988]: E1123 08:21:26.694850 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe259f8-4f37-416e-8700-361c86767339" containerName="nova-api-api" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.694885 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe259f8-4f37-416e-8700-361c86767339" containerName="nova-api-api" Nov 23 08:21:26 crc kubenswrapper[4988]: E1123 08:21:26.694941 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4d596f-639f-47c4-8785-6c96ce284f50" containerName="nova-manage" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.694956 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4d596f-639f-47c4-8785-6c96ce284f50" containerName="nova-manage" Nov 23 08:21:26 crc kubenswrapper[4988]: E1123 08:21:26.694987 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe259f8-4f37-416e-8700-361c86767339" containerName="nova-api-log" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.695002 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe259f8-4f37-416e-8700-361c86767339" containerName="nova-api-log" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.695381 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe259f8-4f37-416e-8700-361c86767339" containerName="nova-api-log" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.695418 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4d596f-639f-47c4-8785-6c96ce284f50" containerName="nova-manage" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.695453 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe259f8-4f37-416e-8700-361c86767339" containerName="nova-api-api" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.697571 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.700756 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.709142 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.897155 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d365f8-b00a-4a1d-b3e8-d467fda3c271-logs\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.897396 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-config-data\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.897582 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:26 crc kubenswrapper[4988]: I1123 08:21:26.897927 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwmtd\" (UniqueName: \"kubernetes.io/projected/15d365f8-b00a-4a1d-b3e8-d467fda3c271-kube-api-access-wwmtd\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.002097 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-config-data\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.002220 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.002336 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwmtd\" (UniqueName: \"kubernetes.io/projected/15d365f8-b00a-4a1d-b3e8-d467fda3c271-kube-api-access-wwmtd\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.002454 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d365f8-b00a-4a1d-b3e8-d467fda3c271-logs\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.002990 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d365f8-b00a-4a1d-b3e8-d467fda3c271-logs\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " 
pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.007447 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.014324 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-config-data\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.035705 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwmtd\" (UniqueName: \"kubernetes.io/projected/15d365f8-b00a-4a1d-b3e8-d467fda3c271-kube-api-access-wwmtd\") pod \"nova-api-0\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.036537 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.130798 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65495f77b5-pzchj"] Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.131102 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" podUID="34928546-bc40-4d88-ba37-a5afad947c49" containerName="dnsmasq-dns" containerID="cri-o://603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5" gracePeriod=10 Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.327092 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.501034 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.599097 4988 generic.go:334] "Generic (PLEG): container finished" podID="34928546-bc40-4d88-ba37-a5afad947c49" containerID="603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5" exitCode=0 Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.599406 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" event={"ID":"34928546-bc40-4d88-ba37-a5afad947c49","Type":"ContainerDied","Data":"603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5"} Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.599455 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" event={"ID":"34928546-bc40-4d88-ba37-a5afad947c49","Type":"ContainerDied","Data":"fbd5ce94a70cae232cc7f950be9774d25eeef555bf1cb0c748bb3e36353cb244"} Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.599481 4988 scope.go:117] "RemoveContainer" containerID="603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.599695 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65495f77b5-pzchj" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.615526 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-dns-svc\") pod \"34928546-bc40-4d88-ba37-a5afad947c49\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.615756 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-sb\") pod \"34928546-bc40-4d88-ba37-a5afad947c49\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.616813 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbtbw\" (UniqueName: \"kubernetes.io/projected/34928546-bc40-4d88-ba37-a5afad947c49-kube-api-access-vbtbw\") pod \"34928546-bc40-4d88-ba37-a5afad947c49\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.616966 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-nb\") pod \"34928546-bc40-4d88-ba37-a5afad947c49\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.617029 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-config\") pod \"34928546-bc40-4d88-ba37-a5afad947c49\" (UID: \"34928546-bc40-4d88-ba37-a5afad947c49\") " Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.629577 4988 scope.go:117] "RemoveContainer" containerID="992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.639400 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34928546-bc40-4d88-ba37-a5afad947c49-kube-api-access-vbtbw" (OuterVolumeSpecName: "kube-api-access-vbtbw") pod "34928546-bc40-4d88-ba37-a5afad947c49" (UID: "34928546-bc40-4d88-ba37-a5afad947c49"). InnerVolumeSpecName "kube-api-access-vbtbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.653500 4988 scope.go:117] "RemoveContainer" containerID="603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5" Nov 23 08:21:27 crc kubenswrapper[4988]: E1123 08:21:27.654259 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5\": container with ID starting with 603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5 not found: ID does not exist" containerID="603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.654300 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5"} err="failed to get container status \"603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5\": rpc error: code = NotFound desc = could not find container \"603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5\": container with ID starting with 603ab54341d4d66fd7f3e9dd15614df64be9e026fe2f122f9cddfe0a3f552cd5 not found: ID does not exist" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.654327 4988 scope.go:117] "RemoveContainer" containerID="992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83" Nov 23 08:21:27 crc kubenswrapper[4988]: E1123 08:21:27.654722 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83\": container with ID starting with 992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83 not found: ID does not exist" containerID="992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.654765 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83"} err="failed to get container status \"992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83\": rpc error: code = NotFound desc = could not find container \"992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83\": container with ID starting with 992d823c9c7b55c47b7347dfff131715b1c3393187619b8f2fd4b7eac024eb83 not found: ID does not exist" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.678565 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "34928546-bc40-4d88-ba37-a5afad947c49" (UID: "34928546-bc40-4d88-ba37-a5afad947c49"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.691633 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-config" (OuterVolumeSpecName: "config") pod "34928546-bc40-4d88-ba37-a5afad947c49" (UID: "34928546-bc40-4d88-ba37-a5afad947c49"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.694403 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34928546-bc40-4d88-ba37-a5afad947c49" (UID: "34928546-bc40-4d88-ba37-a5afad947c49"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.696645 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "34928546-bc40-4d88-ba37-a5afad947c49" (UID: "34928546-bc40-4d88-ba37-a5afad947c49"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.720980 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.721055 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.721065 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.721074 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34928546-bc40-4d88-ba37-a5afad947c49-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.721086 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbtbw\" (UniqueName: \"kubernetes.io/projected/34928546-bc40-4d88-ba37-a5afad947c49-kube-api-access-vbtbw\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.816274 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.960336 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65495f77b5-pzchj"] Nov 23 08:21:27 crc kubenswrapper[4988]: I1123 08:21:27.968565 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65495f77b5-pzchj"] Nov 23 08:21:28 crc kubenswrapper[4988]: I1123 08:21:28.513165 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34928546-bc40-4d88-ba37-a5afad947c49" path="/var/lib/kubelet/pods/34928546-bc40-4d88-ba37-a5afad947c49/volumes" Nov 23 08:21:28 crc kubenswrapper[4988]: I1123 08:21:28.514969 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbe259f8-4f37-416e-8700-361c86767339" path="/var/lib/kubelet/pods/fbe259f8-4f37-416e-8700-361c86767339/volumes" Nov 23 08:21:28 crc kubenswrapper[4988]: I1123 08:21:28.620063 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15d365f8-b00a-4a1d-b3e8-d467fda3c271","Type":"ContainerStarted","Data":"287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39"} Nov 23 08:21:28 crc kubenswrapper[4988]: 
I1123 08:21:28.620106 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15d365f8-b00a-4a1d-b3e8-d467fda3c271","Type":"ContainerStarted","Data":"782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b"} Nov 23 08:21:28 crc kubenswrapper[4988]: I1123 08:21:28.620124 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15d365f8-b00a-4a1d-b3e8-d467fda3c271","Type":"ContainerStarted","Data":"5d6b78f2bb43a9602a3e2ed71ef44f90c3f8a5bea9d1d9d1c9484ed2fb78d2af"} Nov 23 08:21:28 crc kubenswrapper[4988]: I1123 08:21:28.641426 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.641403631 podStartE2EDuration="2.641403631s" podCreationTimestamp="2025-11-23 08:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:21:28.635503607 +0000 UTC m=+5740.944016410" watchObservedRunningTime="2025-11-23 08:21:28.641403631 +0000 UTC m=+5740.949916404" Nov 23 08:21:29 crc kubenswrapper[4988]: I1123 08:21:29.063982 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 23 08:21:37 crc kubenswrapper[4988]: I1123 08:21:37.328251 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 08:21:37 crc kubenswrapper[4988]: I1123 08:21:37.328895 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 08:21:38 crc kubenswrapper[4988]: I1123 08:21:38.410443 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:21:38 crc kubenswrapper[4988]: I1123 08:21:38.410430 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:21:48 crc kubenswrapper[4988]: I1123 08:21:48.411427 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:21:48 crc kubenswrapper[4988]: I1123 08:21:48.411482 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:21:50 crc kubenswrapper[4988]: I1123 08:21:50.067227 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-fa70-account-create-hdw5g"] Nov 23 08:21:50 crc kubenswrapper[4988]: I1123 08:21:50.084061 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-t4px9"] Nov 23 08:21:50 crc kubenswrapper[4988]: I1123 08:21:50.094494 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-fa70-account-create-hdw5g"] 
Nov 23 08:21:50 crc kubenswrapper[4988]: I1123 08:21:50.103947 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-t4px9"] Nov 23 08:21:50 crc kubenswrapper[4988]: I1123 08:21:50.509016 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2364b99b-fe19-46f3-abdc-d5f0cc41b80f" path="/var/lib/kubelet/pods/2364b99b-fe19-46f3-abdc-d5f0cc41b80f/volumes" Nov 23 08:21:50 crc kubenswrapper[4988]: I1123 08:21:50.509910 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd8b350-8faf-4ee1-acc5-d550e98f0a3e" path="/var/lib/kubelet/pods/ebd8b350-8faf-4ee1-acc5-d550e98f0a3e/volumes" Nov 23 08:21:51 crc kubenswrapper[4988]: E1123 08:21:51.813165 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ac509b_def3_41f0_9ae6_53bf183721ec.slice/crio-ef2fb16aa6147d0d367bdd34aa2176e584f005a973234e0b8002a227a117ac7d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ac509b_def3_41f0_9ae6_53bf183721ec.slice/crio-conmon-ef2fb16aa6147d0d367bdd34aa2176e584f005a973234e0b8002a227a117ac7d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda115f463_d12d_4d94_9b97_e4982fe02b03.slice/crio-conmon-59cfd3129bdac1daccc2e5ad07beaf0939fd8f57d21188bf050f6f9867d4918a.scope\": RecentStats: unable to find data in memory cache]" Nov 23 08:21:51 crc kubenswrapper[4988]: I1123 08:21:51.903212 4988 generic.go:334] "Generic (PLEG): container finished" podID="28ac509b-def3-41f0-9ae6-53bf183721ec" containerID="ef2fb16aa6147d0d367bdd34aa2176e584f005a973234e0b8002a227a117ac7d" exitCode=137 Nov 23 08:21:51 crc kubenswrapper[4988]: I1123 08:21:51.903597 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"28ac509b-def3-41f0-9ae6-53bf183721ec","Type":"ContainerDied","Data":"ef2fb16aa6147d0d367bdd34aa2176e584f005a973234e0b8002a227a117ac7d"} Nov 23 08:21:51 crc kubenswrapper[4988]: I1123 08:21:51.903696 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"28ac509b-def3-41f0-9ae6-53bf183721ec","Type":"ContainerDied","Data":"734fc4c7c53a23c36ce30033e48b4806da5aed9f5e1718329dc248971f5a023c"} Nov 23 08:21:51 crc kubenswrapper[4988]: I1123 08:21:51.903761 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="734fc4c7c53a23c36ce30033e48b4806da5aed9f5e1718329dc248971f5a023c" Nov 23 08:21:51 crc kubenswrapper[4988]: I1123 08:21:51.905513 4988 generic.go:334] "Generic (PLEG): container finished" podID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerID="59cfd3129bdac1daccc2e5ad07beaf0939fd8f57d21188bf050f6f9867d4918a" exitCode=137 Nov 23 08:21:51 crc kubenswrapper[4988]: I1123 08:21:51.905589 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a115f463-d12d-4d94-9b97-e4982fe02b03","Type":"ContainerDied","Data":"59cfd3129bdac1daccc2e5ad07beaf0939fd8f57d21188bf050f6f9867d4918a"} Nov 23 08:21:51 crc kubenswrapper[4988]: I1123 08:21:51.995352 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.003156 4988 util.go:48] "No ready sandbox for pod can be found. 
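Both "container finished" events above report exitCode=137. Under the usual 128+signal wait-status convention (an inference here, not something the log states), 137 encodes signal 9, SIGKILL: the containers did not exit on SIGTERM within their grace period and were force-killed. A one-liner making the decoding explicit:

```go
package main

import "fmt"

func main() {
	// exitCode observed in the ContainerDied events above.
	exitCode := 137

	// By the standard 128+n convention (assumed, not spelled out by the
	// kubelet), codes above 128 encode the terminating signal number.
	if exitCode > 128 {
		fmt.Printf("exit code %d => terminated by signal %d (9 = SIGKILL)\n",
			exitCode, exitCode-128)
	}
}
```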
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.065661 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-combined-ca-bundle\") pod \"a115f463-d12d-4d94-9b97-e4982fe02b03\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.065760 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-config-data\") pod \"28ac509b-def3-41f0-9ae6-53bf183721ec\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.065812 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdnsq\" (UniqueName: \"kubernetes.io/projected/a115f463-d12d-4d94-9b97-e4982fe02b03-kube-api-access-pdnsq\") pod \"a115f463-d12d-4d94-9b97-e4982fe02b03\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.065877 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a115f463-d12d-4d94-9b97-e4982fe02b03-logs\") pod \"a115f463-d12d-4d94-9b97-e4982fe02b03\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.065899 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tqqg\" (UniqueName: \"kubernetes.io/projected/28ac509b-def3-41f0-9ae6-53bf183721ec-kube-api-access-4tqqg\") pod \"28ac509b-def3-41f0-9ae6-53bf183721ec\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.065927 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-config-data\") pod \"a115f463-d12d-4d94-9b97-e4982fe02b03\" (UID: \"a115f463-d12d-4d94-9b97-e4982fe02b03\") " Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.065959 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-combined-ca-bundle\") pod \"28ac509b-def3-41f0-9ae6-53bf183721ec\" (UID: \"28ac509b-def3-41f0-9ae6-53bf183721ec\") " Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.066499 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a115f463-d12d-4d94-9b97-e4982fe02b03-logs" (OuterVolumeSpecName: "logs") pod "a115f463-d12d-4d94-9b97-e4982fe02b03" (UID: "a115f463-d12d-4d94-9b97-e4982fe02b03"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.071236 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a115f463-d12d-4d94-9b97-e4982fe02b03-kube-api-access-pdnsq" (OuterVolumeSpecName: "kube-api-access-pdnsq") pod "a115f463-d12d-4d94-9b97-e4982fe02b03" (UID: "a115f463-d12d-4d94-9b97-e4982fe02b03"). InnerVolumeSpecName "kube-api-access-pdnsq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.072693 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ac509b-def3-41f0-9ae6-53bf183721ec-kube-api-access-4tqqg" (OuterVolumeSpecName: "kube-api-access-4tqqg") pod "28ac509b-def3-41f0-9ae6-53bf183721ec" (UID: "28ac509b-def3-41f0-9ae6-53bf183721ec"). InnerVolumeSpecName "kube-api-access-4tqqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.093272 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-config-data" (OuterVolumeSpecName: "config-data") pod "a115f463-d12d-4d94-9b97-e4982fe02b03" (UID: "a115f463-d12d-4d94-9b97-e4982fe02b03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.094815 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-config-data" (OuterVolumeSpecName: "config-data") pod "28ac509b-def3-41f0-9ae6-53bf183721ec" (UID: "28ac509b-def3-41f0-9ae6-53bf183721ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.097364 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a115f463-d12d-4d94-9b97-e4982fe02b03" (UID: "a115f463-d12d-4d94-9b97-e4982fe02b03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.100418 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28ac509b-def3-41f0-9ae6-53bf183721ec" (UID: "28ac509b-def3-41f0-9ae6-53bf183721ec"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.167952 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.167987 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.167998 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdnsq\" (UniqueName: \"kubernetes.io/projected/a115f463-d12d-4d94-9b97-e4982fe02b03-kube-api-access-pdnsq\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.168010 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a115f463-d12d-4d94-9b97-e4982fe02b03-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.168019 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tqqg\" (UniqueName: \"kubernetes.io/projected/28ac509b-def3-41f0-9ae6-53bf183721ec-kube-api-access-4tqqg\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.168027 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a115f463-d12d-4d94-9b97-e4982fe02b03-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.168037 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ac509b-def3-41f0-9ae6-53bf183721ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.921712 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.922482 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a115f463-d12d-4d94-9b97-e4982fe02b03","Type":"ContainerDied","Data":"a01b7b294bbfac40e8cfac7b744f1ad5a434e33219026cfbda3c142c02aa01e3"} Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.922556 4988 scope.go:117] "RemoveContainer" containerID="59cfd3129bdac1daccc2e5ad07beaf0939fd8f57d21188bf050f6f9867d4918a" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.922590 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:21:52 crc kubenswrapper[4988]: I1123 08:21:52.965820 4988 scope.go:117] "RemoveContainer" containerID="b3546375d08e4e5c0e47568011355ae39bf2d9d99933cc0827dd36dac4c67eab" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.005303 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.026411 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.039018 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.055609 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.060276 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:21:53 crc kubenswrapper[4988]: E1123 08:21:53.060848 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34928546-bc40-4d88-ba37-a5afad947c49" containerName="dnsmasq-dns" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.060876 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="34928546-bc40-4d88-ba37-a5afad947c49" containerName="dnsmasq-dns" Nov 23 08:21:53 crc kubenswrapper[4988]: E1123 08:21:53.060913 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34928546-bc40-4d88-ba37-a5afad947c49" containerName="init" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.060925 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="34928546-bc40-4d88-ba37-a5afad947c49" containerName="init" Nov 23 08:21:53 crc kubenswrapper[4988]: E1123 08:21:53.060941 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ac509b-def3-41f0-9ae6-53bf183721ec" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.060950 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ac509b-def3-41f0-9ae6-53bf183721ec" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 08:21:53 crc kubenswrapper[4988]: E1123 08:21:53.060972 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerName="nova-metadata-metadata" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.060983 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerName="nova-metadata-metadata" Nov 23 08:21:53 crc kubenswrapper[4988]: E1123 08:21:53.061012 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerName="nova-metadata-log" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.061024 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerName="nova-metadata-log" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.061337 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ac509b-def3-41f0-9ae6-53bf183721ec" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.061373 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerName="nova-metadata-metadata" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.061398 4988 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a115f463-d12d-4d94-9b97-e4982fe02b03" containerName="nova-metadata-log" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.061415 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="34928546-bc40-4d88-ba37-a5afad947c49" containerName="dnsmasq-dns" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.066953 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.068911 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.069572 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.074460 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.075502 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.078930 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.078973 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.079112 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.086892 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-config-data\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.086957 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrw5n\" (UniqueName: \"kubernetes.io/projected/462112ad-1e32-4a30-9daf-f31f47faa8ac-kube-api-access-hrw5n\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.086999 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.087059 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462112ad-1e32-4a30-9daf-f31f47faa8ac-logs\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.087093 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.089514 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.097400 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.188412 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.188475 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.188513 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wz6l\" (UniqueName: \"kubernetes.io/projected/8ba70dda-e08f-4b54-8536-9652905f571b-kube-api-access-4wz6l\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.188670 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.188806 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.188835 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462112ad-1e32-4a30-9daf-f31f47faa8ac-logs\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.188939 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.188971 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 
08:21:53.188996 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-config-data\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.189117 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrw5n\" (UniqueName: \"kubernetes.io/projected/462112ad-1e32-4a30-9daf-f31f47faa8ac-kube-api-access-hrw5n\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.189265 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462112ad-1e32-4a30-9daf-f31f47faa8ac-logs\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.194294 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.194308 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.202035 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-config-data\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.210461 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrw5n\" (UniqueName: \"kubernetes.io/projected/462112ad-1e32-4a30-9daf-f31f47faa8ac-kube-api-access-hrw5n\") pod \"nova-metadata-0\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.291235 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.291388 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.291550 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.291629 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wz6l\" (UniqueName: \"kubernetes.io/projected/8ba70dda-e08f-4b54-8536-9652905f571b-kube-api-access-4wz6l\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.291739 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.296427 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.296635 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.298118 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.300697 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ba70dda-e08f-4b54-8536-9652905f571b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.312870 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wz6l\" (UniqueName: \"kubernetes.io/projected/8ba70dda-e08f-4b54-8536-9652905f571b-kube-api-access-4wz6l\") pod \"nova-cell1-novncproxy-0\" (UID: \"8ba70dda-e08f-4b54-8536-9652905f571b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.385864 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.394885 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:53 crc kubenswrapper[4988]: W1123 08:21:53.948877 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba70dda_e08f_4b54_8536_9652905f571b.slice/crio-e3cac109851b89ae30f52706cff74e0cabfbff60c591730e38efeb61aef2ae33 WatchSource:0}: Error finding container e3cac109851b89ae30f52706cff74e0cabfbff60c591730e38efeb61aef2ae33: Status 404 returned error can't find the container with id e3cac109851b89ae30f52706cff74e0cabfbff60c591730e38efeb61aef2ae33 Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.951433 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:21:53 crc kubenswrapper[4988]: W1123 08:21:53.963350 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod462112ad_1e32_4a30_9daf_f31f47faa8ac.slice/crio-e2ae3097e31aade85d18ba321204eb037e9e3887e2545959943e1f944205ad8e WatchSource:0}: Error finding container e2ae3097e31aade85d18ba321204eb037e9e3887e2545959943e1f944205ad8e: Status 404 returned error can't find the container with id e2ae3097e31aade85d18ba321204eb037e9e3887e2545959943e1f944205ad8e Nov 23 08:21:53 crc kubenswrapper[4988]: I1123 08:21:53.964091 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:21:54 crc kubenswrapper[4988]: I1123 08:21:54.515184 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ac509b-def3-41f0-9ae6-53bf183721ec" path="/var/lib/kubelet/pods/28ac509b-def3-41f0-9ae6-53bf183721ec/volumes" Nov 23 08:21:54 crc kubenswrapper[4988]: I1123 08:21:54.517824 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a115f463-d12d-4d94-9b97-e4982fe02b03" path="/var/lib/kubelet/pods/a115f463-d12d-4d94-9b97-e4982fe02b03/volumes" Nov 23 08:21:54 crc kubenswrapper[4988]: I1123 08:21:54.964109 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462112ad-1e32-4a30-9daf-f31f47faa8ac","Type":"ContainerStarted","Data":"c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119"} Nov 23 08:21:54 crc kubenswrapper[4988]: I1123 08:21:54.964158 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462112ad-1e32-4a30-9daf-f31f47faa8ac","Type":"ContainerStarted","Data":"40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee"} Nov 23 08:21:54 crc kubenswrapper[4988]: I1123 08:21:54.964169 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462112ad-1e32-4a30-9daf-f31f47faa8ac","Type":"ContainerStarted","Data":"e2ae3097e31aade85d18ba321204eb037e9e3887e2545959943e1f944205ad8e"} Nov 23 08:21:54 crc kubenswrapper[4988]: I1123 08:21:54.973904 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8ba70dda-e08f-4b54-8536-9652905f571b","Type":"ContainerStarted","Data":"9f606a3b5d7e05c18ef6b2a0db33a443fb251eb612bf2dfbf7c995de302afc6d"} Nov 23 08:21:54 crc kubenswrapper[4988]: I1123 08:21:54.973984 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8ba70dda-e08f-4b54-8536-9652905f571b","Type":"ContainerStarted","Data":"e3cac109851b89ae30f52706cff74e0cabfbff60c591730e38efeb61aef2ae33"} Nov 23 08:21:55 crc kubenswrapper[4988]: I1123 08:21:55.011972 4988 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.011942446 podStartE2EDuration="3.011942446s" podCreationTimestamp="2025-11-23 08:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:21:54.999226336 +0000 UTC m=+5767.307739119" watchObservedRunningTime="2025-11-23 08:21:55.011942446 +0000 UTC m=+5767.320455239" Nov 23 08:21:55 crc kubenswrapper[4988]: I1123 08:21:55.031358 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.031341839 podStartE2EDuration="3.031341839s" podCreationTimestamp="2025-11-23 08:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:21:55.027741011 +0000 UTC m=+5767.336253804" watchObservedRunningTime="2025-11-23 08:21:55.031341839 +0000 UTC m=+5767.339854602" Nov 23 08:21:55 crc kubenswrapper[4988]: I1123 08:21:55.984680 4988 generic.go:334] "Generic (PLEG): container finished" podID="9c73840a-5954-4d7d-8b9c-d1bc99ae90b3" containerID="fd172833764f2175ea5002b94bac5d12a31b5c96a770662c2fa68a60bedb5160" exitCode=137 Nov 23 08:21:55 crc kubenswrapper[4988]: I1123 08:21:55.984738 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3","Type":"ContainerDied","Data":"fd172833764f2175ea5002b94bac5d12a31b5c96a770662c2fa68a60bedb5160"} Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.193317 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.261593 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-config-data\") pod \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.261725 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdgw6\" (UniqueName: \"kubernetes.io/projected/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-kube-api-access-pdgw6\") pod \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.261785 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-combined-ca-bundle\") pod \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\" (UID: \"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3\") " Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.268300 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-kube-api-access-pdgw6" (OuterVolumeSpecName: "kube-api-access-pdgw6") pod "9c73840a-5954-4d7d-8b9c-d1bc99ae90b3" (UID: "9c73840a-5954-4d7d-8b9c-d1bc99ae90b3"). InnerVolumeSpecName "kube-api-access-pdgw6". 
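Each "Observed pod startup duration" entry above is internally consistent: podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp. A short check against the nova-metadata-0 entry, with both timestamps quoted from the log; the kubelet's monotonic-clock suffix ("m=+…") has to be stripped before parsing, since time.Parse does not understand it:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Layout of the timestamps as the kubelet prints them; time.Parse also
// accepts the fractional seconds present in the second timestamp.
const layout = "2006-01-02 15:04:05 -0700 MST"

// parse drops the monotonic-clock reading appended after " m=" before
// handing the string to time.Parse.
func parse(s string) time.Time {
	s = strings.Split(s, " m=")[0]
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps quoted from the nova-metadata-0 startup-latency entry.
	created := parse("2025-11-23 08:21:52 +0000 UTC")
	observed := parse("2025-11-23 08:21:55.011942446 +0000 UTC m=+5767.320455239")

	// Prints 3.011942446s, matching the logged podStartSLOduration.
	fmt.Println("podStartSLOduration:", observed.Sub(created))
}
```

The nova-api-0 (2.641403631s) and nova-cell1-novncproxy-0 (3.031341839s) entries check out the same way against their own creation timestamps.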
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.289516 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-config-data" (OuterVolumeSpecName: "config-data") pod "9c73840a-5954-4d7d-8b9c-d1bc99ae90b3" (UID: "9c73840a-5954-4d7d-8b9c-d1bc99ae90b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.291483 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c73840a-5954-4d7d-8b9c-d1bc99ae90b3" (UID: "9c73840a-5954-4d7d-8b9c-d1bc99ae90b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.364074 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.364118 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdgw6\" (UniqueName: \"kubernetes.io/projected/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-kube-api-access-pdgw6\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.364131 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.995080 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c73840a-5954-4d7d-8b9c-d1bc99ae90b3","Type":"ContainerDied","Data":"6e95b393849f9f6705f70924be9bebc601b926b0a18e17f924ec6e330e26b915"} Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.995503 4988 scope.go:117] "RemoveContainer" containerID="fd172833764f2175ea5002b94bac5d12a31b5c96a770662c2fa68a60bedb5160" Nov 23 08:21:56 crc kubenswrapper[4988]: I1123 08:21:56.995526 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.020924 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.028674 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.047103 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:21:57 crc kubenswrapper[4988]: E1123 08:21:57.047709 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c73840a-5954-4d7d-8b9c-d1bc99ae90b3" containerName="nova-scheduler-scheduler" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.047739 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c73840a-5954-4d7d-8b9c-d1bc99ae90b3" containerName="nova-scheduler-scheduler" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.048030 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c73840a-5954-4d7d-8b9c-d1bc99ae90b3" containerName="nova-scheduler-scheduler" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.048909 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.051019 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.059306 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.077962 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.078080 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brtd\" (UniqueName: \"kubernetes.io/projected/9925dff7-e396-49c7-8c34-5dab620f6d4a-kube-api-access-4brtd\") pod \"nova-scheduler-0\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.078255 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-config-data\") pod \"nova-scheduler-0\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.179911 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brtd\" (UniqueName: \"kubernetes.io/projected/9925dff7-e396-49c7-8c34-5dab620f6d4a-kube-api-access-4brtd\") pod \"nova-scheduler-0\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.180019 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-config-data\") pod \"nova-scheduler-0\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc 
kubenswrapper[4988]: I1123 08:21:57.180073 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.184256 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.184323 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-config-data\") pod \"nova-scheduler-0\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.200389 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brtd\" (UniqueName: \"kubernetes.io/projected/9925dff7-e396-49c7-8c34-5dab620f6d4a-kube-api-access-4brtd\") pod \"nova-scheduler-0\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") " pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.327318 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.327662 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.427461 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:21:57 crc kubenswrapper[4988]: I1123 08:21:57.736500 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:21:57 crc kubenswrapper[4988]: W1123 08:21:57.745650 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9925dff7_e396_49c7_8c34_5dab620f6d4a.slice/crio-781c1622c0d60fb50899f4969f707e30a2ff897b53547920ad83afa33e83f65c WatchSource:0}: Error finding container 781c1622c0d60fb50899f4969f707e30a2ff897b53547920ad83afa33e83f65c: Status 404 returned error can't find the container with id 781c1622c0d60fb50899f4969f707e30a2ff897b53547920ad83afa33e83f65c Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.009641 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9925dff7-e396-49c7-8c34-5dab620f6d4a","Type":"ContainerStarted","Data":"1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14"} Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.010214 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9925dff7-e396-49c7-8c34-5dab620f6d4a","Type":"ContainerStarted","Data":"781c1622c0d60fb50899f4969f707e30a2ff897b53547920ad83afa33e83f65c"} Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.039225 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.039172646 podStartE2EDuration="1.039172646s" podCreationTimestamp="2025-11-23 08:21:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:21:58.028842974 +0000 UTC m=+5770.337355737" watchObservedRunningTime="2025-11-23 08:21:58.039172646 +0000 UTC m=+5770.347685419" Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.386408 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.386481 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.395208 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.409901 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.409944 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.430571 4988 scope.go:117] "RemoveContainer" containerID="5f7f84fe227043860f32b575baf5b2b744fba16189ae429bba94df3da9925f5b" Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 08:21:58.465190 4988 scope.go:117] "RemoveContainer" containerID="79a7db4bee8568ccf56583cd5e14f33e3659b45e6b2626bcdf43b91d71148a26" Nov 23 08:21:58 crc kubenswrapper[4988]: I1123 
08:21:58.518731 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c73840a-5954-4d7d-8b9c-d1bc99ae90b3" path="/var/lib/kubelet/pods/9c73840a-5954-4d7d-8b9c-d1bc99ae90b3/volumes" Nov 23 08:22:01 crc kubenswrapper[4988]: I1123 08:22:01.036952 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-bsjwl"] Nov 23 08:22:01 crc kubenswrapper[4988]: I1123 08:22:01.056319 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-bsjwl"] Nov 23 08:22:02 crc kubenswrapper[4988]: I1123 08:22:02.428522 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 08:22:02 crc kubenswrapper[4988]: I1123 08:22:02.515876 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67755350-66e6-4df3-8736-8761deed3ea7" path="/var/lib/kubelet/pods/67755350-66e6-4df3-8736-8761deed3ea7/volumes" Nov 23 08:22:03 crc kubenswrapper[4988]: I1123 08:22:03.386282 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 08:22:03 crc kubenswrapper[4988]: I1123 08:22:03.386346 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 08:22:03 crc kubenswrapper[4988]: I1123 08:22:03.395829 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:22:03 crc kubenswrapper[4988]: I1123 08:22:03.417358 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.103276 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.345386 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-zrmnq"] Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.347075 4988 util.go:30] "No sandbox for pod can be found. 
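The "Failed to process watch event" warnings at 08:21:53 and 08:21:57 are cadvisor racing CRI-O: the cgroup shows up in its watch before the container is registered, so the lookup returns 404. The slice names in those events are derivable from the pod UIDs that appear elsewhere in the log: dashes become underscores inside a kubepods-besteffort-pod….slice unit. A sketch of that mapping, with the besteffort QoS segment read off the logged paths rather than computed:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceForPod reconstructs the cgroup path named in the watch-event
// warnings above: the pod UID with dashes replaced by underscores,
// nested under the besteffort kubepods slice, followed by the CRI-O
// container ID.
func sliceForPod(uid, containerID string) string {
	u := strings.ReplaceAll(uid, "-", "_")
	return fmt.Sprintf(
		"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod%s.slice/crio-%s",
		u, containerID)
}

func main() {
	// UID and container ID quoted from the nova-scheduler-0 entries.
	fmt.Println(sliceForPod(
		"9925dff7-e396-49c7-8c34-5dab620f6d4a",
		"781c1622c0d60fb50899f4969f707e30a2ff897b53547920ad83afa33e83f65c"))
}
```

Running it reproduces the Name field of the 08:21:57 warning exactly (the cadvisor stats paths at 08:21:51 additionally carry a ".scope" suffix and conmon- entries, which this sketch does not model).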
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.354270 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.354622 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.361120 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zrmnq"] Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.398291 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.83:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.398530 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.83:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.477048 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-config-data\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.477125 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-scripts\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.477260 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nltxw\" (UniqueName: \"kubernetes.io/projected/82b3ee4a-8bc5-4cf2-a345-c47267451d60-kube-api-access-nltxw\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.477307 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.582681 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-scripts\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.582978 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltxw\" (UniqueName: 
\"kubernetes.io/projected/82b3ee4a-8bc5-4cf2-a345-c47267451d60-kube-api-access-nltxw\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.583048 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.583248 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-config-data\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.589520 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-scripts\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.590041 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-config-data\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.591630 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.608276 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nltxw\" (UniqueName: \"kubernetes.io/projected/82b3ee4a-8bc5-4cf2-a345-c47267451d60-kube-api-access-nltxw\") pod \"nova-cell1-cell-mapping-zrmnq\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:04 crc kubenswrapper[4988]: I1123 08:22:04.680457 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:05 crc kubenswrapper[4988]: I1123 08:22:05.222866 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zrmnq"] Nov 23 08:22:06 crc kubenswrapper[4988]: I1123 08:22:06.104297 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zrmnq" event={"ID":"82b3ee4a-8bc5-4cf2-a345-c47267451d60","Type":"ContainerStarted","Data":"e125af4531b7c5e66435f3cac544e62e8a97e4f2d72067750ef3fdab19958b22"} Nov 23 08:22:06 crc kubenswrapper[4988]: I1123 08:22:06.104347 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zrmnq" event={"ID":"82b3ee4a-8bc5-4cf2-a345-c47267451d60","Type":"ContainerStarted","Data":"385dea6d2afd747fdba60bca41f19bf6b3dfd98d3dc4cefafbc8ecbb10cc8d58"} Nov 23 08:22:06 crc kubenswrapper[4988]: I1123 08:22:06.126347 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-zrmnq" podStartSLOduration=2.126328039 podStartE2EDuration="2.126328039s" podCreationTimestamp="2025-11-23 08:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:22:06.124770861 +0000 UTC m=+5778.433283664" watchObservedRunningTime="2025-11-23 08:22:06.126328039 +0000 UTC m=+5778.434840802" Nov 23 08:22:07 crc kubenswrapper[4988]: I1123 08:22:07.427905 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 08:22:07 crc kubenswrapper[4988]: I1123 08:22:07.463038 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 08:22:08 crc kubenswrapper[4988]: I1123 08:22:08.158694 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 08:22:08 crc kubenswrapper[4988]: I1123 08:22:08.409705 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:22:08 crc kubenswrapper[4988]: I1123 08:22:08.410136 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.82:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:22:10 crc kubenswrapper[4988]: I1123 08:22:10.150826 4988 generic.go:334] "Generic (PLEG): container finished" podID="82b3ee4a-8bc5-4cf2-a345-c47267451d60" containerID="e125af4531b7c5e66435f3cac544e62e8a97e4f2d72067750ef3fdab19958b22" exitCode=0 Nov 23 08:22:10 crc kubenswrapper[4988]: I1123 08:22:10.150902 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zrmnq" event={"ID":"82b3ee4a-8bc5-4cf2-a345-c47267451d60","Type":"ContainerDied","Data":"e125af4531b7c5e66435f3cac544e62e8a97e4f2d72067750ef3fdab19958b22"} Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.457241 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.536798 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nltxw\" (UniqueName: \"kubernetes.io/projected/82b3ee4a-8bc5-4cf2-a345-c47267451d60-kube-api-access-nltxw\") pod \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.537287 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-combined-ca-bundle\") pod \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.537607 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-scripts\") pod \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.537860 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-config-data\") pod \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\" (UID: \"82b3ee4a-8bc5-4cf2-a345-c47267451d60\") " Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.542623 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-scripts" (OuterVolumeSpecName: "scripts") pod "82b3ee4a-8bc5-4cf2-a345-c47267451d60" (UID: "82b3ee4a-8bc5-4cf2-a345-c47267451d60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.550083 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b3ee4a-8bc5-4cf2-a345-c47267451d60-kube-api-access-nltxw" (OuterVolumeSpecName: "kube-api-access-nltxw") pod "82b3ee4a-8bc5-4cf2-a345-c47267451d60" (UID: "82b3ee4a-8bc5-4cf2-a345-c47267451d60"). InnerVolumeSpecName "kube-api-access-nltxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.565081 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82b3ee4a-8bc5-4cf2-a345-c47267451d60" (UID: "82b3ee4a-8bc5-4cf2-a345-c47267451d60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.576588 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-config-data" (OuterVolumeSpecName: "config-data") pod "82b3ee4a-8bc5-4cf2-a345-c47267451d60" (UID: "82b3ee4a-8bc5-4cf2-a345-c47267451d60"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.640590 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.640624 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.640637 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nltxw\" (UniqueName: \"kubernetes.io/projected/82b3ee4a-8bc5-4cf2-a345-c47267451d60-kube-api-access-nltxw\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:11 crc kubenswrapper[4988]: I1123 08:22:11.640649 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b3ee4a-8bc5-4cf2-a345-c47267451d60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.174664 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zrmnq" event={"ID":"82b3ee4a-8bc5-4cf2-a345-c47267451d60","Type":"ContainerDied","Data":"385dea6d2afd747fdba60bca41f19bf6b3dfd98d3dc4cefafbc8ecbb10cc8d58"} Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.175106 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="385dea6d2afd747fdba60bca41f19bf6b3dfd98d3dc4cefafbc8ecbb10cc8d58" Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.174753 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zrmnq" Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.365317 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.365568 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-log" containerID="cri-o://782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b" gracePeriod=30 Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.365648 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-api" containerID="cri-o://287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39" gracePeriod=30 Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.383381 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.383770 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler" containerID="cri-o://1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" gracePeriod=30 Nov 23 08:22:12 crc kubenswrapper[4988]: E1123 08:22:12.429513 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:12 crc kubenswrapper[4988]: E1123 08:22:12.430972 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:12 crc kubenswrapper[4988]: E1123 08:22:12.435613 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:12 crc kubenswrapper[4988]: E1123 08:22:12.435692 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler" Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.446986 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.447463 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-log" containerID="cri-o://40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee" gracePeriod=30 Nov 23 08:22:12 crc kubenswrapper[4988]: I1123 08:22:12.447917 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-metadata" containerID="cri-o://c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119" gracePeriod=30 Nov 23 08:22:13 crc kubenswrapper[4988]: I1123 08:22:13.185570 4988 generic.go:334] "Generic (PLEG): container finished" podID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerID="40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee" exitCode=143 Nov 23 08:22:13 crc kubenswrapper[4988]: I1123 08:22:13.185621 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462112ad-1e32-4a30-9daf-f31f47faa8ac","Type":"ContainerDied","Data":"40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee"} Nov 23 08:22:13 crc kubenswrapper[4988]: I1123 08:22:13.187774 4988 generic.go:334] "Generic (PLEG): container finished" podID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerID="782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b" exitCode=143 Nov 23 08:22:13 crc kubenswrapper[4988]: I1123 08:22:13.187809 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15d365f8-b00a-4a1d-b3e8-d467fda3c271","Type":"ContainerDied","Data":"782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b"} Nov 23 08:22:15 crc kubenswrapper[4988]: I1123 08:22:15.054719 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nvkdt"] Nov 23 08:22:15 crc kubenswrapper[4988]: I1123 08:22:15.065329 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nvkdt"] Nov 23 08:22:16 crc kubenswrapper[4988]: I1123 
08:22:16.510485 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe7b079-462b-435a-8066-4a37543eef94" path="/var/lib/kubelet/pods/efe7b079-462b-435a-8066-4a37543eef94/volumes" Nov 23 08:22:17 crc kubenswrapper[4988]: E1123 08:22:17.430095 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:17 crc kubenswrapper[4988]: E1123 08:22:17.432536 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:17 crc kubenswrapper[4988]: E1123 08:22:17.434426 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:17 crc kubenswrapper[4988]: E1123 08:22:17.434483 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler" Nov 23 08:22:21 crc kubenswrapper[4988]: I1123 08:22:21.673017 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:22:21 crc kubenswrapper[4988]: I1123 08:22:21.673652 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:22:22 crc kubenswrapper[4988]: E1123 08:22:22.431032 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:22 crc kubenswrapper[4988]: E1123 08:22:22.433496 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:22 crc kubenswrapper[4988]: E1123 08:22:22.435453 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:22 crc kubenswrapper[4988]: E1123 08:22:22.435654 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.223793 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.292236 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-combined-ca-bundle\") pod \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.292410 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwmtd\" (UniqueName: \"kubernetes.io/projected/15d365f8-b00a-4a1d-b3e8-d467fda3c271-kube-api-access-wwmtd\") pod \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.292468 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d365f8-b00a-4a1d-b3e8-d467fda3c271-logs\") pod \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.292493 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-config-data\") pod \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\" (UID: \"15d365f8-b00a-4a1d-b3e8-d467fda3c271\") " Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.293179 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15d365f8-b00a-4a1d-b3e8-d467fda3c271-logs" (OuterVolumeSpecName: "logs") pod "15d365f8-b00a-4a1d-b3e8-d467fda3c271" (UID: "15d365f8-b00a-4a1d-b3e8-d467fda3c271"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.298211 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d365f8-b00a-4a1d-b3e8-d467fda3c271-kube-api-access-wwmtd" (OuterVolumeSpecName: "kube-api-access-wwmtd") pod "15d365f8-b00a-4a1d-b3e8-d467fda3c271" (UID: "15d365f8-b00a-4a1d-b3e8-d467fda3c271"). InnerVolumeSpecName "kube-api-access-wwmtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.309600 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.321833 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-config-data" (OuterVolumeSpecName: "config-data") pod "15d365f8-b00a-4a1d-b3e8-d467fda3c271" (UID: "15d365f8-b00a-4a1d-b3e8-d467fda3c271"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.325674 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15d365f8-b00a-4a1d-b3e8-d467fda3c271" (UID: "15d365f8-b00a-4a1d-b3e8-d467fda3c271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.394548 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-combined-ca-bundle\") pod \"462112ad-1e32-4a30-9daf-f31f47faa8ac\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.394665 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-nova-metadata-tls-certs\") pod \"462112ad-1e32-4a30-9daf-f31f47faa8ac\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.394831 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrw5n\" (UniqueName: \"kubernetes.io/projected/462112ad-1e32-4a30-9daf-f31f47faa8ac-kube-api-access-hrw5n\") pod \"462112ad-1e32-4a30-9daf-f31f47faa8ac\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.394916 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-config-data\") pod \"462112ad-1e32-4a30-9daf-f31f47faa8ac\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.395301 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462112ad-1e32-4a30-9daf-f31f47faa8ac-logs\") pod \"462112ad-1e32-4a30-9daf-f31f47faa8ac\" (UID: \"462112ad-1e32-4a30-9daf-f31f47faa8ac\") " Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.395787 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/462112ad-1e32-4a30-9daf-f31f47faa8ac-logs" (OuterVolumeSpecName: "logs") pod "462112ad-1e32-4a30-9daf-f31f47faa8ac" (UID: "462112ad-1e32-4a30-9daf-f31f47faa8ac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.396053 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwmtd\" (UniqueName: \"kubernetes.io/projected/15d365f8-b00a-4a1d-b3e8-d467fda3c271-kube-api-access-wwmtd\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.396093 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15d365f8-b00a-4a1d-b3e8-d467fda3c271-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.396104 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.396114 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462112ad-1e32-4a30-9daf-f31f47faa8ac-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.396122 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d365f8-b00a-4a1d-b3e8-d467fda3c271-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.397306 4988 generic.go:334] "Generic (PLEG): container finished" podID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerID="c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119" exitCode=0 Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.397355 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.397412 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462112ad-1e32-4a30-9daf-f31f47faa8ac","Type":"ContainerDied","Data":"c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119"} Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.397458 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"462112ad-1e32-4a30-9daf-f31f47faa8ac","Type":"ContainerDied","Data":"e2ae3097e31aade85d18ba321204eb037e9e3887e2545959943e1f944205ad8e"} Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.397477 4988 scope.go:117] "RemoveContainer" containerID="c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.398160 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462112ad-1e32-4a30-9daf-f31f47faa8ac-kube-api-access-hrw5n" (OuterVolumeSpecName: "kube-api-access-hrw5n") pod "462112ad-1e32-4a30-9daf-f31f47faa8ac" (UID: "462112ad-1e32-4a30-9daf-f31f47faa8ac"). InnerVolumeSpecName "kube-api-access-hrw5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.403968 4988 generic.go:334] "Generic (PLEG): container finished" podID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerID="287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39" exitCode=0 Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.404003 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15d365f8-b00a-4a1d-b3e8-d467fda3c271","Type":"ContainerDied","Data":"287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39"} Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.404028 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"15d365f8-b00a-4a1d-b3e8-d467fda3c271","Type":"ContainerDied","Data":"5d6b78f2bb43a9602a3e2ed71ef44f90c3f8a5bea9d1d9d1c9484ed2fb78d2af"} Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.404077 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.424370 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "462112ad-1e32-4a30-9daf-f31f47faa8ac" (UID: "462112ad-1e32-4a30-9daf-f31f47faa8ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.437296 4988 scope.go:117] "RemoveContainer" containerID="40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.443523 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.445769 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-config-data" (OuterVolumeSpecName: "config-data") pod "462112ad-1e32-4a30-9daf-f31f47faa8ac" (UID: "462112ad-1e32-4a30-9daf-f31f47faa8ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.464288 4988 scope.go:117] "RemoveContainer" containerID="c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119" Nov 23 08:22:26 crc kubenswrapper[4988]: E1123 08:22:26.464706 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119\": container with ID starting with c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119 not found: ID does not exist" containerID="c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.464750 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119"} err="failed to get container status \"c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119\": rpc error: code = NotFound desc = could not find container \"c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119\": container with ID starting with c4bb97ab48eca3cce57fc327d555022d8942e85ecf6853206ec9e7aa92b65119 not found: ID does not exist" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.464777 4988 scope.go:117] "RemoveContainer" containerID="40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee" Nov 23 08:22:26 crc kubenswrapper[4988]: E1123 08:22:26.465050 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee\": container with ID starting with 40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee not found: ID does not exist" containerID="40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.465077 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee"} err="failed to get container status \"40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee\": rpc error: code = NotFound desc = could not find container \"40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee\": container with ID starting with 40248669730fe6b013790c8a4451f2a249dbb0e72577380cd22041a6b17c62ee not found: ID does not exist" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.465092 4988 scope.go:117] "RemoveContainer" containerID="287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.465890 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "462112ad-1e32-4a30-9daf-f31f47faa8ac" (UID: "462112ad-1e32-4a30-9daf-f31f47faa8ac"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.468020 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.482369 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 08:22:26 crc kubenswrapper[4988]: E1123 08:22:26.482743 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-log" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.482760 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-log" Nov 23 08:22:26 crc kubenswrapper[4988]: E1123 08:22:26.482781 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-metadata" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.482789 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-metadata" Nov 23 08:22:26 crc kubenswrapper[4988]: E1123 08:22:26.482808 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-log" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.482813 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-log" Nov 23 08:22:26 crc kubenswrapper[4988]: E1123 08:22:26.482835 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b3ee4a-8bc5-4cf2-a345-c47267451d60" containerName="nova-manage" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.482841 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b3ee4a-8bc5-4cf2-a345-c47267451d60" containerName="nova-manage" Nov 23 08:22:26 crc kubenswrapper[4988]: E1123 08:22:26.482856 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-api" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.482861 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-api" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.483031 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-log" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.483049 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-log" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.483062 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" containerName="nova-api-api" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.483070 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" containerName="nova-metadata-metadata" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.483088 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b3ee4a-8bc5-4cf2-a345-c47267451d60" containerName="nova-manage" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.484071 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.489701 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.490071 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.498436 4988 scope.go:117] "RemoveContainer" containerID="782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.499286 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrw5n\" (UniqueName: \"kubernetes.io/projected/462112ad-1e32-4a30-9daf-f31f47faa8ac-kube-api-access-hrw5n\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.499310 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.499321 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.499330 4988 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/462112ad-1e32-4a30-9daf-f31f47faa8ac-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.530243 4988 scope.go:117] "RemoveContainer" containerID="287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39" Nov 23 08:22:26 crc kubenswrapper[4988]: E1123 08:22:26.530974 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39\": container with ID starting with 287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39 not found: ID does not exist" containerID="287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.531009 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39"} err="failed to get container status \"287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39\": rpc error: code = NotFound desc = could not find container \"287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39\": container with ID starting with 287662e9db71e6d38211c5c5121b736c346182d286ce607550f9d90c6c232d39 not found: ID does not exist" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.531034 4988 scope.go:117] "RemoveContainer" containerID="782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b" Nov 23 08:22:26 crc kubenswrapper[4988]: E1123 08:22:26.531563 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b\": container with ID starting with 782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b not found: ID does not exist" containerID="782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b" Nov 23 08:22:26 crc 
kubenswrapper[4988]: I1123 08:22:26.531584 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b"} err="failed to get container status \"782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b\": rpc error: code = NotFound desc = could not find container \"782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b\": container with ID starting with 782917cafc4d0e0367985a33041f1aacd6675131d34509442d45c8e2ab66ff9b not found: ID does not exist" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.534019 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d365f8-b00a-4a1d-b3e8-d467fda3c271" path="/var/lib/kubelet/pods/15d365f8-b00a-4a1d-b3e8-d467fda3c271/volumes" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.601118 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-config-data\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.601249 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxvhl\" (UniqueName: \"kubernetes.io/projected/637d87fd-a50f-470a-8632-176d44940e75-kube-api-access-dxvhl\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.601275 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.601315 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/637d87fd-a50f-470a-8632-176d44940e75-logs\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.705947 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-config-data\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.706626 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxvhl\" (UniqueName: \"kubernetes.io/projected/637d87fd-a50f-470a-8632-176d44940e75-kube-api-access-dxvhl\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.706686 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.706855 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/637d87fd-a50f-470a-8632-176d44940e75-logs\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.708295 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/637d87fd-a50f-470a-8632-176d44940e75-logs\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.711027 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-config-data\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.713084 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.736867 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.747033 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxvhl\" (UniqueName: \"kubernetes.io/projected/637d87fd-a50f-470a-8632-176d44940e75-kube-api-access-dxvhl\") pod \"nova-api-0\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") " pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.757315 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.770741 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.773081 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.776577 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.781069 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.784698 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.812915 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.911704 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.911760 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-config-data\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.911804 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.911821 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5vs\" (UniqueName: \"kubernetes.io/projected/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-kube-api-access-lk5vs\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:26 crc kubenswrapper[4988]: I1123 08:22:26.911838 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-logs\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.013547 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-config-data\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.014149 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.014207 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk5vs\" (UniqueName: \"kubernetes.io/projected/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-kube-api-access-lk5vs\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.014233 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-logs\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.014452 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.014851 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-logs\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.019963 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.020648 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-config-data\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.029872 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.036480 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk5vs\" (UniqueName: \"kubernetes.io/projected/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-kube-api-access-lk5vs\") pod \"nova-metadata-0\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.069748 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.100037 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.412847 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"637d87fd-a50f-470a-8632-176d44940e75","Type":"ContainerStarted","Data":"8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd"} Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.414292 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"637d87fd-a50f-470a-8632-176d44940e75","Type":"ContainerStarted","Data":"7e8623514ba8efc00b750214d5e912a170ad130edf5f3caea8e9c98d2f5c0b22"} Nov 23 08:22:27 crc kubenswrapper[4988]: E1123 08:22:27.430445 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:27 crc kubenswrapper[4988]: E1123 08:22:27.432894 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:27 crc kubenswrapper[4988]: E1123 08:22:27.436114 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:27 crc kubenswrapper[4988]: E1123 08:22:27.436142 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler" Nov 23 08:22:27 crc kubenswrapper[4988]: W1123 08:22:27.579508 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81bbc54c_288e_4ec1_a085_78e0ddf18d2d.slice/crio-dd336c983ee1a38aff12dcaa693c41bb5e9e56f812931dac753910807ad839c6 WatchSource:0}: Error finding container dd336c983ee1a38aff12dcaa693c41bb5e9e56f812931dac753910807ad839c6: Status 404 returned error can't find the container with id dd336c983ee1a38aff12dcaa693c41bb5e9e56f812931dac753910807ad839c6 Nov 23 08:22:27 crc kubenswrapper[4988]: I1123 08:22:27.588333 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:22:28 crc kubenswrapper[4988]: I1123 08:22:28.430311 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bbc54c-288e-4ec1-a085-78e0ddf18d2d","Type":"ContainerStarted","Data":"6f8ed6fa46e43d3dc6c51ce0397f217429b3d65727a574a6f768b4b2c53ae827"} Nov 23 08:22:28 crc kubenswrapper[4988]: I1123 08:22:28.430364 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bbc54c-288e-4ec1-a085-78e0ddf18d2d","Type":"ContainerStarted","Data":"e10a0ba2d59b9aeea5747dddbdea52543daebe8314190332dba7d0a315bb5442"} Nov 23 08:22:28 crc kubenswrapper[4988]: I1123 
08:22:28.430381 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bbc54c-288e-4ec1-a085-78e0ddf18d2d","Type":"ContainerStarted","Data":"dd336c983ee1a38aff12dcaa693c41bb5e9e56f812931dac753910807ad839c6"} Nov 23 08:22:28 crc kubenswrapper[4988]: I1123 08:22:28.436819 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"637d87fd-a50f-470a-8632-176d44940e75","Type":"ContainerStarted","Data":"4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa"} Nov 23 08:22:28 crc kubenswrapper[4988]: I1123 08:22:28.463744 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.463722503 podStartE2EDuration="2.463722503s" podCreationTimestamp="2025-11-23 08:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:22:28.453376591 +0000 UTC m=+5800.761889354" watchObservedRunningTime="2025-11-23 08:22:28.463722503 +0000 UTC m=+5800.772235266" Nov 23 08:22:28 crc kubenswrapper[4988]: I1123 08:22:28.479432 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.479411196 podStartE2EDuration="2.479411196s" podCreationTimestamp="2025-11-23 08:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:22:28.47751639 +0000 UTC m=+5800.786029163" watchObservedRunningTime="2025-11-23 08:22:28.479411196 +0000 UTC m=+5800.787923969" Nov 23 08:22:28 crc kubenswrapper[4988]: I1123 08:22:28.514567 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462112ad-1e32-4a30-9daf-f31f47faa8ac" path="/var/lib/kubelet/pods/462112ad-1e32-4a30-9daf-f31f47faa8ac/volumes" Nov 23 08:22:32 crc kubenswrapper[4988]: I1123 08:22:32.100847 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:22:32 crc kubenswrapper[4988]: I1123 08:22:32.101594 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:22:32 crc kubenswrapper[4988]: E1123 08:22:32.430241 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:32 crc kubenswrapper[4988]: E1123 08:22:32.431616 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:32 crc kubenswrapper[4988]: E1123 08:22:32.433124 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:22:32 crc kubenswrapper[4988]: E1123 08:22:32.433364 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler"
Nov 23 08:22:36 crc kubenswrapper[4988]: I1123 08:22:36.813820 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 08:22:36 crc kubenswrapper[4988]: I1123 08:22:36.814904 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 08:22:37 crc kubenswrapper[4988]: I1123 08:22:37.100910 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 08:22:37 crc kubenswrapper[4988]: I1123 08:22:37.100960 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 08:22:37 crc kubenswrapper[4988]: E1123 08:22:37.430319 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 08:22:37 crc kubenswrapper[4988]: E1123 08:22:37.431514 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 08:22:37 crc kubenswrapper[4988]: E1123 08:22:37.433209 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 08:22:37 crc kubenswrapper[4988]: E1123 08:22:37.433251 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler"
Nov 23 08:22:37 crc kubenswrapper[4988]: I1123 08:22:37.896466 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 08:22:37 crc kubenswrapper[4988]: I1123 08:22:37.896584 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.87:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 08:22:38 crc kubenswrapper[4988]: I1123 08:22:38.113371 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 08:22:38 crc kubenswrapper[4988]: I1123 08:22:38.113383 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 08:22:42 crc kubenswrapper[4988]: E1123 08:22:42.430538 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14 is running failed: container process not found" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 08:22:42 crc kubenswrapper[4988]: E1123 08:22:42.431559 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14 is running failed: container process not found" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 08:22:42 crc kubenswrapper[4988]: E1123 08:22:42.432126 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14 is running failed: container process not found" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 23 08:22:42 crc kubenswrapper[4988]: E1123 08:22:42.432182 4988 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler"
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.606237 4988 generic.go:334] "Generic (PLEG): container finished" podID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14" exitCode=137
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.606278 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9925dff7-e396-49c7-8c34-5dab620f6d4a","Type":"ContainerDied","Data":"1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14"}
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.810613 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.868053 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4brtd\" (UniqueName: \"kubernetes.io/projected/9925dff7-e396-49c7-8c34-5dab620f6d4a-kube-api-access-4brtd\") pod \"9925dff7-e396-49c7-8c34-5dab620f6d4a\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") "
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.869159 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-config-data\") pod \"9925dff7-e396-49c7-8c34-5dab620f6d4a\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") "
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.869458 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-combined-ca-bundle\") pod \"9925dff7-e396-49c7-8c34-5dab620f6d4a\" (UID: \"9925dff7-e396-49c7-8c34-5dab620f6d4a\") "
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.891793 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9925dff7-e396-49c7-8c34-5dab620f6d4a-kube-api-access-4brtd" (OuterVolumeSpecName: "kube-api-access-4brtd") pod "9925dff7-e396-49c7-8c34-5dab620f6d4a" (UID: "9925dff7-e396-49c7-8c34-5dab620f6d4a"). InnerVolumeSpecName "kube-api-access-4brtd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.892509 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4brtd\" (UniqueName: \"kubernetes.io/projected/9925dff7-e396-49c7-8c34-5dab620f6d4a-kube-api-access-4brtd\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.899574 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9925dff7-e396-49c7-8c34-5dab620f6d4a" (UID: "9925dff7-e396-49c7-8c34-5dab620f6d4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.909316 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-config-data" (OuterVolumeSpecName: "config-data") pod "9925dff7-e396-49c7-8c34-5dab620f6d4a" (UID: "9925dff7-e396-49c7-8c34-5dab620f6d4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.996302 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:42 crc kubenswrapper[4988]: I1123 08:22:42.996346 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9925dff7-e396-49c7-8c34-5dab620f6d4a-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.624712 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9925dff7-e396-49c7-8c34-5dab620f6d4a","Type":"ContainerDied","Data":"781c1622c0d60fb50899f4969f707e30a2ff897b53547920ad83afa33e83f65c"}
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.624806 4988 scope.go:117] "RemoveContainer" containerID="1445529c86ccba26ff120c1e0bee47b3c89caff6ee0caf6baaa7b227a8f6ea14"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.624812 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.703128 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.730378 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.743615 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 08:22:43 crc kubenswrapper[4988]: E1123 08:22:43.744095 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.744110 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.744459 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" containerName="nova-scheduler-scheduler"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.745175 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.745280 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.749681 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.811354 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h2bd\" (UniqueName: \"kubernetes.io/projected/836ddb4b-a468-45d0-9f73-48fcea741926-kube-api-access-6h2bd\") pod \"nova-scheduler-0\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.811576 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-config-data\") pod \"nova-scheduler-0\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.811719 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.913722 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-config-data\") pod \"nova-scheduler-0\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.913846 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.914032 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h2bd\" (UniqueName: \"kubernetes.io/projected/836ddb4b-a468-45d0-9f73-48fcea741926-kube-api-access-6h2bd\") pod \"nova-scheduler-0\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.920171 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-config-data\") pod \"nova-scheduler-0\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.923756 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " pod="openstack/nova-scheduler-0"
Nov 23 08:22:43 crc kubenswrapper[4988]: I1123 08:22:43.947845 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h2bd\" (UniqueName: \"kubernetes.io/projected/836ddb4b-a468-45d0-9f73-48fcea741926-kube-api-access-6h2bd\") pod \"nova-scheduler-0\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " pod="openstack/nova-scheduler-0"
Nov 23 08:22:44 crc kubenswrapper[4988]: I1123 08:22:44.088507 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 08:22:44 crc kubenswrapper[4988]: I1123 08:22:44.506764 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9925dff7-e396-49c7-8c34-5dab620f6d4a" path="/var/lib/kubelet/pods/9925dff7-e396-49c7-8c34-5dab620f6d4a/volumes"
Nov 23 08:22:44 crc kubenswrapper[4988]: I1123 08:22:44.592649 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 08:22:44 crc kubenswrapper[4988]: I1123 08:22:44.642444 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"836ddb4b-a468-45d0-9f73-48fcea741926","Type":"ContainerStarted","Data":"84d1b607cea2f08f28d2c01f881e56a197598467ffe010b66691f4701a694655"}
Nov 23 08:22:45 crc kubenswrapper[4988]: I1123 08:22:45.663349 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"836ddb4b-a468-45d0-9f73-48fcea741926","Type":"ContainerStarted","Data":"b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b"}
Nov 23 08:22:45 crc kubenswrapper[4988]: I1123 08:22:45.708488 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.708453374 podStartE2EDuration="2.708453374s" podCreationTimestamp="2025-11-23 08:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:22:45.696613065 +0000 UTC m=+5818.005125858" watchObservedRunningTime="2025-11-23 08:22:45.708453374 +0000 UTC m=+5818.016966177"
Nov 23 08:22:46 crc kubenswrapper[4988]: I1123 08:22:46.817995 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 08:22:46 crc kubenswrapper[4988]: I1123 08:22:46.818626 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 08:22:46 crc kubenswrapper[4988]: I1123 08:22:46.818796 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 08:22:46 crc kubenswrapper[4988]: I1123 08:22:46.819258 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 08:22:46 crc kubenswrapper[4988]: I1123 08:22:46.822461 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 08:22:46 crc kubenswrapper[4988]: I1123 08:22:46.829467 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.013223 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54d57df949-wlqch"]
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.017631 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.035400 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54d57df949-wlqch"]
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.075321 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-dns-svc\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.075364 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfgm7\" (UniqueName: \"kubernetes.io/projected/32c53be8-ebd9-4e99-8395-c93637b97a50-kube-api-access-kfgm7\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.075403 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-sb\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.075432 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-config\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.075750 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-nb\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.107156 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.116544 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.116653 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.178079 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-dns-svc\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.178141 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfgm7\" (UniqueName: \"kubernetes.io/projected/32c53be8-ebd9-4e99-8395-c93637b97a50-kube-api-access-kfgm7\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.178264 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-sb\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.178330 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-config\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.178597 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-nb\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.179127 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-dns-svc\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.179384 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-config\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.179761 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-nb\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.179795 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-sb\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.200915 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfgm7\" (UniqueName: \"kubernetes.io/projected/32c53be8-ebd9-4e99-8395-c93637b97a50-kube-api-access-kfgm7\") pod \"dnsmasq-dns-54d57df949-wlqch\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.340920 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.697542 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 23 08:22:47 crc kubenswrapper[4988]: I1123 08:22:47.896799 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54d57df949-wlqch"]
Nov 23 08:22:48 crc kubenswrapper[4988]: I1123 08:22:48.694738 4988 generic.go:334] "Generic (PLEG): container finished" podID="32c53be8-ebd9-4e99-8395-c93637b97a50" containerID="d9fbe7927c6a2a72fb39e808a60b805552e358fc9883260774564fc1fef1980b" exitCode=0
Nov 23 08:22:48 crc kubenswrapper[4988]: I1123 08:22:48.694818 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d57df949-wlqch" event={"ID":"32c53be8-ebd9-4e99-8395-c93637b97a50","Type":"ContainerDied","Data":"d9fbe7927c6a2a72fb39e808a60b805552e358fc9883260774564fc1fef1980b"}
Nov 23 08:22:48 crc kubenswrapper[4988]: I1123 08:22:48.695294 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d57df949-wlqch" event={"ID":"32c53be8-ebd9-4e99-8395-c93637b97a50","Type":"ContainerStarted","Data":"c53c55a20c7687e752e0df17f8ef8536ba70a1f85b1e5a1dca863544e1f6cdd2"}
Nov 23 08:22:49 crc kubenswrapper[4988]: I1123 08:22:49.089488 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 23 08:22:49 crc kubenswrapper[4988]: I1123 08:22:49.442795 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 23 08:22:49 crc kubenswrapper[4988]: I1123 08:22:49.708862 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d57df949-wlqch" event={"ID":"32c53be8-ebd9-4e99-8395-c93637b97a50","Type":"ContainerStarted","Data":"95616d463b906dcc43bcaec1a64844fa42fc36325d4eb4c48b0eb95f5994fa60"}
Nov 23 08:22:49 crc kubenswrapper[4988]: I1123 08:22:49.709097 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-log" containerID="cri-o://8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd" gracePeriod=30
Nov 23 08:22:49 crc kubenswrapper[4988]: I1123 08:22:49.709261 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-api" containerID="cri-o://4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa" gracePeriod=30
Nov 23 08:22:49 crc kubenswrapper[4988]: I1123 08:22:49.709807 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:49 crc kubenswrapper[4988]: I1123 08:22:49.744176 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54d57df949-wlqch" podStartSLOduration=3.744152118 podStartE2EDuration="3.744152118s" podCreationTimestamp="2025-11-23 08:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:22:49.737968707 +0000 UTC m=+5822.046481470" watchObservedRunningTime="2025-11-23 08:22:49.744152118 +0000 UTC m=+5822.052664921"
Nov 23 08:22:50 crc kubenswrapper[4988]: I1123 08:22:50.719285 4988 generic.go:334] "Generic (PLEG): container finished" podID="637d87fd-a50f-470a-8632-176d44940e75" containerID="8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd" exitCode=143
Nov 23 08:22:50 crc kubenswrapper[4988]: I1123 08:22:50.719380 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"637d87fd-a50f-470a-8632-176d44940e75","Type":"ContainerDied","Data":"8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd"}
Nov 23 08:22:51 crc kubenswrapper[4988]: I1123 08:22:51.672959 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:22:51 crc kubenswrapper[4988]: I1123 08:22:51.673028 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.278026 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.406990 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/637d87fd-a50f-470a-8632-176d44940e75-logs\") pod \"637d87fd-a50f-470a-8632-176d44940e75\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") "
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.407093 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxvhl\" (UniqueName: \"kubernetes.io/projected/637d87fd-a50f-470a-8632-176d44940e75-kube-api-access-dxvhl\") pod \"637d87fd-a50f-470a-8632-176d44940e75\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") "
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.407153 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-combined-ca-bundle\") pod \"637d87fd-a50f-470a-8632-176d44940e75\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") "
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.407358 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-config-data\") pod \"637d87fd-a50f-470a-8632-176d44940e75\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") "
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.407577 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/637d87fd-a50f-470a-8632-176d44940e75-logs" (OuterVolumeSpecName: "logs") pod "637d87fd-a50f-470a-8632-176d44940e75" (UID: "637d87fd-a50f-470a-8632-176d44940e75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.408165 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/637d87fd-a50f-470a-8632-176d44940e75-logs\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.435344 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/637d87fd-a50f-470a-8632-176d44940e75-kube-api-access-dxvhl" (OuterVolumeSpecName: "kube-api-access-dxvhl") pod "637d87fd-a50f-470a-8632-176d44940e75" (UID: "637d87fd-a50f-470a-8632-176d44940e75"). InnerVolumeSpecName "kube-api-access-dxvhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:22:53 crc kubenswrapper[4988]: E1123 08:22:53.445457 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-combined-ca-bundle podName:637d87fd-a50f-470a-8632-176d44940e75 nodeName:}" failed. No retries permitted until 2025-11-23 08:22:53.945423933 +0000 UTC m=+5826.253936716 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-combined-ca-bundle") pod "637d87fd-a50f-470a-8632-176d44940e75" (UID: "637d87fd-a50f-470a-8632-176d44940e75") : error deleting /var/lib/kubelet/pods/637d87fd-a50f-470a-8632-176d44940e75/volume-subpaths: remove /var/lib/kubelet/pods/637d87fd-a50f-470a-8632-176d44940e75/volume-subpaths: no such file or directory
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.453373 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-config-data" (OuterVolumeSpecName: "config-data") pod "637d87fd-a50f-470a-8632-176d44940e75" (UID: "637d87fd-a50f-470a-8632-176d44940e75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.513412 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxvhl\" (UniqueName: \"kubernetes.io/projected/637d87fd-a50f-470a-8632-176d44940e75-kube-api-access-dxvhl\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.513759 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.763826 4988 generic.go:334] "Generic (PLEG): container finished" podID="637d87fd-a50f-470a-8632-176d44940e75" containerID="4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa" exitCode=0
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.763881 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"637d87fd-a50f-470a-8632-176d44940e75","Type":"ContainerDied","Data":"4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa"}
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.763906 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"637d87fd-a50f-470a-8632-176d44940e75","Type":"ContainerDied","Data":"7e8623514ba8efc00b750214d5e912a170ad130edf5f3caea8e9c98d2f5c0b22"}
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.763923 4988 scope.go:117] "RemoveContainer" containerID="4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa"
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.763938 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.787896 4988 scope.go:117] "RemoveContainer" containerID="8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd"
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.810498 4988 scope.go:117] "RemoveContainer" containerID="4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa"
Nov 23 08:22:53 crc kubenswrapper[4988]: E1123 08:22:53.810835 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa\": container with ID starting with 4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa not found: ID does not exist" containerID="4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa"
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.810860 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa"} err="failed to get container status \"4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa\": rpc error: code = NotFound desc = could not find container \"4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa\": container with ID starting with 4fc903cf158ee1550f4edf9529d95c8be6c8c42deb2fb708fcb649ae0b828baa not found: ID does not exist"
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.810878 4988 scope.go:117] "RemoveContainer" containerID="8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd"
Nov 23 08:22:53 crc kubenswrapper[4988]: E1123 08:22:53.811135 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd\": container with ID starting with 8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd not found: ID does not exist" containerID="8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd"
Nov 23 08:22:53 crc kubenswrapper[4988]: I1123 08:22:53.811156 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd"} err="failed to get container status \"8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd\": rpc error: code = NotFound desc = could not find container \"8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd\": container with ID starting with 8ab2e3fedb5c66ec82313b27faa61fd8644a1596077b816d95ec0440058068dd not found: ID does not exist"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.025471 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-combined-ca-bundle\") pod \"637d87fd-a50f-470a-8632-176d44940e75\" (UID: \"637d87fd-a50f-470a-8632-176d44940e75\") "
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.039367 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "637d87fd-a50f-470a-8632-176d44940e75" (UID: "637d87fd-a50f-470a-8632-176d44940e75"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.089685 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.117370 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.131052 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/637d87fd-a50f-470a-8632-176d44940e75-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.133040 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.142257 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.160835 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 23 08:22:54 crc kubenswrapper[4988]: E1123 08:22:54.161666 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-log"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.161688 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-log"
Nov 23 08:22:54 crc kubenswrapper[4988]: E1123 08:22:54.161747 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-api"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.161757 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-api"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.161985 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-log"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.162031 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="637d87fd-a50f-470a-8632-176d44940e75" containerName="nova-api-api"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.163333 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.170796 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.170856 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.171057 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.171101 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.232547 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9c1d535-02d9-4462-860f-f48bb5ba1454-logs\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.232603 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgfk\" (UniqueName: \"kubernetes.io/projected/a9c1d535-02d9-4462-860f-f48bb5ba1454-kube-api-access-9jgfk\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.232628 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.232654 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-config-data\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.232765 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.232810 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-public-tls-certs\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.335282 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-public-tls-certs\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.335556 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9c1d535-02d9-4462-860f-f48bb5ba1454-logs\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.335643 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgfk\" (UniqueName: \"kubernetes.io/projected/a9c1d535-02d9-4462-860f-f48bb5ba1454-kube-api-access-9jgfk\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.335697 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.335760 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-config-data\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.335887 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.336342 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9c1d535-02d9-4462-860f-f48bb5ba1454-logs\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.341829 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.345351 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-public-tls-certs\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.345556 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.348032 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-config-data\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.357453 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgfk\" (UniqueName: \"kubernetes.io/projected/a9c1d535-02d9-4462-860f-f48bb5ba1454-kube-api-access-9jgfk\") pod \"nova-api-0\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.491317 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.514247 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="637d87fd-a50f-470a-8632-176d44940e75" path="/var/lib/kubelet/pods/637d87fd-a50f-470a-8632-176d44940e75/volumes"
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.803390 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 23 08:22:54 crc kubenswrapper[4988]: I1123 08:22:54.830504 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 23 08:22:55 crc kubenswrapper[4988]: I1123 08:22:55.810923 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9c1d535-02d9-4462-860f-f48bb5ba1454","Type":"ContainerStarted","Data":"bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8"}
Nov 23 08:22:55 crc kubenswrapper[4988]: I1123 08:22:55.811252 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9c1d535-02d9-4462-860f-f48bb5ba1454","Type":"ContainerStarted","Data":"7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc"}
Nov 23 08:22:55 crc kubenswrapper[4988]: I1123 08:22:55.811266 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9c1d535-02d9-4462-860f-f48bb5ba1454","Type":"ContainerStarted","Data":"40be55423c5b411fed78d2bf1d8c0c2e10dfbfb64d003b9b26fd44d2890a3d10"}
Nov 23 08:22:55 crc kubenswrapper[4988]: I1123 08:22:55.836026 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.836003469 podStartE2EDuration="1.836003469s" podCreationTimestamp="2025-11-23 08:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:22:55.830541466 +0000 UTC m=+5828.139054219" watchObservedRunningTime="2025-11-23 08:22:55.836003469 +0000 UTC m=+5828.144516262"
Nov 23 08:22:57 crc kubenswrapper[4988]: I1123 08:22:57.343414 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54d57df949-wlqch"
Nov 23 08:22:57 crc kubenswrapper[4988]: I1123 08:22:57.412586 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bcb8b5c7-tc6wz"]
Nov 23 08:22:57 crc kubenswrapper[4988]: I1123 08:22:57.412818 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" podUID="c49b62c3-be8d-4852-b42a-87aa039dbf10" containerName="dnsmasq-dns" containerID="cri-o://39bc3f3737af5376963346319441940e77494c946c081ececa8a85e6a81833a4" gracePeriod=10
Nov 23 08:22:57 crc kubenswrapper[4988]: I1123 08:22:57.831361 4988 generic.go:334] "Generic (PLEG): container finished" podID="c49b62c3-be8d-4852-b42a-87aa039dbf10" containerID="39bc3f3737af5376963346319441940e77494c946c081ececa8a85e6a81833a4" exitCode=0
Nov 23 08:22:57 crc kubenswrapper[4988]: I1123 08:22:57.831448 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" event={"ID":"c49b62c3-be8d-4852-b42a-87aa039dbf10","Type":"ContainerDied","Data":"39bc3f3737af5376963346319441940e77494c946c081ececa8a85e6a81833a4"}
Nov 23 08:22:57 crc kubenswrapper[4988]: I1123 08:22:57.831676 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz" event={"ID":"c49b62c3-be8d-4852-b42a-87aa039dbf10","Type":"ContainerDied","Data":"931758f3dd1502e5e843010a6adec803a7ae11a7324cb3c74ecfd4ca1bcc7017"}
Nov 23 08:22:57 crc kubenswrapper[4988]: I1123 08:22:57.831692 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="931758f3dd1502e5e843010a6adec803a7ae11a7324cb3c74ecfd4ca1bcc7017"
Nov 23 08:22:57 crc kubenswrapper[4988]: I1123 08:22:57.910793 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz"
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.004024 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-config\") pod \"c49b62c3-be8d-4852-b42a-87aa039dbf10\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") "
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.004144 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-sb\") pod \"c49b62c3-be8d-4852-b42a-87aa039dbf10\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") "
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.004244 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-dns-svc\") pod \"c49b62c3-be8d-4852-b42a-87aa039dbf10\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") "
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.004383 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcrhp\" (UniqueName: \"kubernetes.io/projected/c49b62c3-be8d-4852-b42a-87aa039dbf10-kube-api-access-wcrhp\") pod \"c49b62c3-be8d-4852-b42a-87aa039dbf10\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") "
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.004433 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-nb\") pod \"c49b62c3-be8d-4852-b42a-87aa039dbf10\" (UID: \"c49b62c3-be8d-4852-b42a-87aa039dbf10\") "
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.019031 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c49b62c3-be8d-4852-b42a-87aa039dbf10-kube-api-access-wcrhp" (OuterVolumeSpecName: "kube-api-access-wcrhp") pod "c49b62c3-be8d-4852-b42a-87aa039dbf10" (UID: "c49b62c3-be8d-4852-b42a-87aa039dbf10"). InnerVolumeSpecName "kube-api-access-wcrhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.054002 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c49b62c3-be8d-4852-b42a-87aa039dbf10" (UID: "c49b62c3-be8d-4852-b42a-87aa039dbf10"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.056003 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-config" (OuterVolumeSpecName: "config") pod "c49b62c3-be8d-4852-b42a-87aa039dbf10" (UID: "c49b62c3-be8d-4852-b42a-87aa039dbf10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.070050 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c49b62c3-be8d-4852-b42a-87aa039dbf10" (UID: "c49b62c3-be8d-4852-b42a-87aa039dbf10"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.076301 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c49b62c3-be8d-4852-b42a-87aa039dbf10" (UID: "c49b62c3-be8d-4852-b42a-87aa039dbf10"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.116985 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.117021 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcrhp\" (UniqueName: \"kubernetes.io/projected/c49b62c3-be8d-4852-b42a-87aa039dbf10-kube-api-access-wcrhp\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.117038 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.117049 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-config\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.117060 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c49b62c3-be8d-4852-b42a-87aa039dbf10-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.662173 4988 scope.go:117] "RemoveContainer" containerID="89aa7a47a5fc30e7d9488bc83e7168d50b29ec721bebb12457f6a3cee56081da"
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.725546 4988 scope.go:117] "RemoveContainer" containerID="302a76d92ffad81b3acbda152c6e1fa4a725fa84b10a6937e62d6a3cd427b913"
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.846160 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bcb8b5c7-tc6wz"
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.888526 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bcb8b5c7-tc6wz"]
Nov 23 08:22:58 crc kubenswrapper[4988]: I1123 08:22:58.896161 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bcb8b5c7-tc6wz"]
Nov 23 08:23:00 crc kubenswrapper[4988]: I1123 08:23:00.515437 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c49b62c3-be8d-4852-b42a-87aa039dbf10" path="/var/lib/kubelet/pods/c49b62c3-be8d-4852-b42a-87aa039dbf10/volumes"
Nov 23 08:23:04 crc kubenswrapper[4988]: I1123 08:23:04.492258 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 08:23:04 crc kubenswrapper[4988]: I1123 08:23:04.492867 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 08:23:05 crc kubenswrapper[4988]: I1123 08:23:05.502358 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.91:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 08:23:05 crc kubenswrapper[4988]: I1123 08:23:05.502386 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.91:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 08:23:14 crc kubenswrapper[4988]: I1123 08:23:14.516141 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 08:23:14 crc kubenswrapper[4988]: I1123 08:23:14.516580 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 08:23:14 crc kubenswrapper[4988]: I1123 08:23:14.516975 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 08:23:14 crc kubenswrapper[4988]: I1123 08:23:14.517025 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 08:23:14 crc kubenswrapper[4988]: I1123 08:23:14.528799 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 08:23:14 crc kubenswrapper[4988]: I1123 08:23:14.532807 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 08:23:21 crc kubenswrapper[4988]: I1123 08:23:21.672489 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:23:21 crc kubenswrapper[4988]: I1123 08:23:21.673230 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:23:21 crc kubenswrapper[4988]: I1123 08:23:21.673304 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw"
Nov 23 08:23:21 crc kubenswrapper[4988]: I1123 08:23:21.674450 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 08:23:21 crc kubenswrapper[4988]: I1123 08:23:21.674543 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" gracePeriod=600
Nov 23 08:23:21 crc kubenswrapper[4988]: E1123 08:23:21.810909 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:23:22 crc kubenswrapper[4988]: I1123 08:23:22.130588 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" exitCode=0
Nov 23 08:23:22 crc kubenswrapper[4988]: I1123 08:23:22.130688 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84"}
Nov 23 08:23:22 crc kubenswrapper[4988]: I1123 08:23:22.131069 4988 scope.go:117] "RemoveContainer" containerID="44e4e57ae61d4bbe83fcc675e7532f3d05f6f4998ae2b06e2a67a208ef247d5c"
Nov 23 08:23:22 crc kubenswrapper[4988]: I1123 08:23:22.132027 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84"
Nov 23 08:23:22 crc kubenswrapper[4988]: E1123 08:23:22.132615 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.084626 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9fbffcd5f-x5rxw"]
Nov 23 08:23:27 crc kubenswrapper[4988]: E1123 08:23:27.085540 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49b62c3-be8d-4852-b42a-87aa039dbf10" containerName="dnsmasq-dns"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.085557 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49b62c3-be8d-4852-b42a-87aa039dbf10" containerName="dnsmasq-dns"
Nov 23 08:23:27 crc kubenswrapper[4988]: E1123 08:23:27.085605 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49b62c3-be8d-4852-b42a-87aa039dbf10" containerName="init"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.085613 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49b62c3-be8d-4852-b42a-87aa039dbf10" containerName="init"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.085852 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c49b62c3-be8d-4852-b42a-87aa039dbf10" containerName="dnsmasq-dns"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.087095 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9fbffcd5f-x5rxw"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.090363 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.090452 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-xxjtq"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.090457 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.100922 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.110790 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9fbffcd5f-x5rxw"]
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.141148 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.141473 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerName="glance-log" containerID="cri-o://c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868" gracePeriod=30
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.141729 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerName="glance-httpd" containerID="cri-o://c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1" gracePeriod=30
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.179257 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-749775bd49-v4zw6"]
Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.181116 4988 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.197426 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-749775bd49-v4zw6"] Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.206971 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.207382 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3106b206-7500-439c-a269-2c42f1a456d5" containerName="glance-log" containerID="cri-o://6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9" gracePeriod=30 Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.208374 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3106b206-7500-439c-a269-2c42f1a456d5" containerName="glance-httpd" containerID="cri-o://aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d" gracePeriod=30 Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.241970 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3243f1b5-995a-48f5-9fea-4969228c6cf3-horizon-secret-key\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.242012 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-scripts\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.242030 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3243f1b5-995a-48f5-9fea-4969228c6cf3-logs\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.242048 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxhgb\" (UniqueName: \"kubernetes.io/projected/3243f1b5-995a-48f5-9fea-4969228c6cf3-kube-api-access-vxhgb\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.242070 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-config-data\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.346009 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-horizon-secret-key\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.346425 4988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-scripts\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.346740 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3243f1b5-995a-48f5-9fea-4969228c6cf3-horizon-secret-key\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.346866 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-scripts\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.346985 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-logs\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.347087 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3243f1b5-995a-48f5-9fea-4969228c6cf3-logs\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.347228 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxhgb\" (UniqueName: \"kubernetes.io/projected/3243f1b5-995a-48f5-9fea-4969228c6cf3-kube-api-access-vxhgb\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.347394 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-config-data\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.347501 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klgk6\" (UniqueName: \"kubernetes.io/projected/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-kube-api-access-klgk6\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.347687 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-config-data\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.347969 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3243f1b5-995a-48f5-9fea-4969228c6cf3-logs\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.348534 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-scripts\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.349494 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-config-data\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.352785 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3243f1b5-995a-48f5-9fea-4969228c6cf3-horizon-secret-key\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.367279 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxhgb\" (UniqueName: \"kubernetes.io/projected/3243f1b5-995a-48f5-9fea-4969228c6cf3-kube-api-access-vxhgb\") pod \"horizon-9fbffcd5f-x5rxw\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") " pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.420995 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9fbffcd5f-x5rxw" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.452440 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-horizon-secret-key\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.452501 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-scripts\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.452598 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-logs\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.452639 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klgk6\" (UniqueName: \"kubernetes.io/projected/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-kube-api-access-klgk6\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.452698 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-config-data\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.455124 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-config-data\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.455469 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-logs\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.457744 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-horizon-secret-key\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.460328 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-scripts\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.477264 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-klgk6\" (UniqueName: \"kubernetes.io/projected/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-kube-api-access-klgk6\") pod \"horizon-749775bd49-v4zw6\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") " pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.502944 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-749775bd49-v4zw6" Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.909307 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9fbffcd5f-x5rxw"] Nov 23 08:23:27 crc kubenswrapper[4988]: I1123 08:23:27.910542 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:23:28 crc kubenswrapper[4988]: W1123 08:23:28.014834 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda38cc9a1_597b_48d9_95bd_d9c39a91d46a.slice/crio-13cd84cb1c31ca909002b12e9c2b2e7d2428f0ed315dd9d64fa1ca474ca46f51 WatchSource:0}: Error finding container 13cd84cb1c31ca909002b12e9c2b2e7d2428f0ed315dd9d64fa1ca474ca46f51: Status 404 returned error can't find the container with id 13cd84cb1c31ca909002b12e9c2b2e7d2428f0ed315dd9d64fa1ca474ca46f51 Nov 23 08:23:28 crc kubenswrapper[4988]: I1123 08:23:28.015165 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-749775bd49-v4zw6"] Nov 23 08:23:28 crc kubenswrapper[4988]: I1123 08:23:28.229813 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-749775bd49-v4zw6" event={"ID":"a38cc9a1-597b-48d9-95bd-d9c39a91d46a","Type":"ContainerStarted","Data":"13cd84cb1c31ca909002b12e9c2b2e7d2428f0ed315dd9d64fa1ca474ca46f51"} Nov 23 08:23:28 crc kubenswrapper[4988]: I1123 08:23:28.231276 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9fbffcd5f-x5rxw" event={"ID":"3243f1b5-995a-48f5-9fea-4969228c6cf3","Type":"ContainerStarted","Data":"88f4fa91fae5c5bd0b59c6f01a12d5bcaab20177fab94361530d9fb8f9c675d0"} Nov 23 08:23:28 crc kubenswrapper[4988]: I1123 08:23:28.234481 4988 generic.go:334] "Generic (PLEG): container finished" podID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerID="c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868" exitCode=143 Nov 23 08:23:28 crc kubenswrapper[4988]: I1123 08:23:28.234565 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"70c383dd-727a-48ae-9c3a-7128a82c4777","Type":"ContainerDied","Data":"c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868"} Nov 23 08:23:28 crc kubenswrapper[4988]: I1123 08:23:28.236880 4988 generic.go:334] "Generic (PLEG): container finished" podID="3106b206-7500-439c-a269-2c42f1a456d5" containerID="6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9" exitCode=143 Nov 23 08:23:28 crc kubenswrapper[4988]: I1123 08:23:28.236930 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3106b206-7500-439c-a269-2c42f1a456d5","Type":"ContainerDied","Data":"6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9"} Nov 23 08:23:28 crc kubenswrapper[4988]: I1123 08:23:28.994824 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-749775bd49-v4zw6"] Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.014740 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77d7859794-vrzpf"] Nov 23 08:23:29 crc 
kubenswrapper[4988]: I1123 08:23:29.030413 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.037972 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.043904 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77d7859794-vrzpf"] Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.052437 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9fbffcd5f-x5rxw"] Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.072274 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79ccf4df9b-dnc54"] Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.073913 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.083091 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79ccf4df9b-dnc54"] Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.196628 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-combined-ca-bundle\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.196983 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-scripts\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197037 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-secret-key\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197062 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-config-data\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197112 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b25f631-1103-4844-89c0-02fb680c32d4-logs\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197134 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee15316-b0c1-4900-95fd-49110a4bab1a-logs\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197161 4988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-tls-certs\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197225 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-config-data\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197274 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689pl\" (UniqueName: \"kubernetes.io/projected/fee15316-b0c1-4900-95fd-49110a4bab1a-kube-api-access-689pl\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197293 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6z2\" (UniqueName: \"kubernetes.io/projected/8b25f631-1103-4844-89c0-02fb680c32d4-kube-api-access-6p6z2\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197310 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-tls-certs\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197342 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-secret-key\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197386 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-scripts\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.197405 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-combined-ca-bundle\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.299249 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-combined-ca-bundle\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 
08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300162 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-scripts\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300244 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-secret-key\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300277 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-config-data\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300328 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b25f631-1103-4844-89c0-02fb680c32d4-logs\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300343 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee15316-b0c1-4900-95fd-49110a4bab1a-logs\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300359 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-tls-certs\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300389 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-config-data\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300433 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-689pl\" (UniqueName: \"kubernetes.io/projected/fee15316-b0c1-4900-95fd-49110a4bab1a-kube-api-access-689pl\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300453 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p6z2\" (UniqueName: \"kubernetes.io/projected/8b25f631-1103-4844-89c0-02fb680c32d4-kube-api-access-6p6z2\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300466 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-tls-certs\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300500 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-secret-key\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300539 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-scripts\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.300558 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-combined-ca-bundle\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.301108 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-scripts\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.301151 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b25f631-1103-4844-89c0-02fb680c32d4-logs\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.301158 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee15316-b0c1-4900-95fd-49110a4bab1a-logs\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.302067 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-scripts\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.302525 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-config-data\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.305505 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-combined-ca-bundle\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 
08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.309608 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-secret-key\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.310994 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-config-data\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.311717 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-tls-certs\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.312744 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-secret-key\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.316432 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-tls-certs\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.317497 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p6z2\" (UniqueName: \"kubernetes.io/projected/8b25f631-1103-4844-89c0-02fb680c32d4-kube-api-access-6p6z2\") pod \"horizon-77d7859794-vrzpf\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.317991 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-combined-ca-bundle\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.320160 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-689pl\" (UniqueName: \"kubernetes.io/projected/fee15316-b0c1-4900-95fd-49110a4bab1a-kube-api-access-689pl\") pod \"horizon-79ccf4df9b-dnc54\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.372616 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.409978 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.870518 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77d7859794-vrzpf"] Nov 23 08:23:29 crc kubenswrapper[4988]: I1123 08:23:29.937977 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79ccf4df9b-dnc54"] Nov 23 08:23:29 crc kubenswrapper[4988]: W1123 08:23:29.947476 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfee15316_b0c1_4900_95fd_49110a4bab1a.slice/crio-442ff9e3f69c0a228e8c66fc70126f25e7de60106d79f88d6342a0a285edc168 WatchSource:0}: Error finding container 442ff9e3f69c0a228e8c66fc70126f25e7de60106d79f88d6342a0a285edc168: Status 404 returned error can't find the container with id 442ff9e3f69c0a228e8c66fc70126f25e7de60106d79f88d6342a0a285edc168 Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.259229 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79ccf4df9b-dnc54" event={"ID":"fee15316-b0c1-4900-95fd-49110a4bab1a","Type":"ContainerStarted","Data":"442ff9e3f69c0a228e8c66fc70126f25e7de60106d79f88d6342a0a285edc168"} Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.260587 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d7859794-vrzpf" event={"ID":"8b25f631-1103-4844-89c0-02fb680c32d4","Type":"ContainerStarted","Data":"fc793220a45f77025adbe4d4f711f609f22eba7206f7da27e468bbdba0047561"} Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.823121 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.870483 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.933164 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-httpd-run\") pod \"70c383dd-727a-48ae-9c3a-7128a82c4777\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.933382 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mnt7\" (UniqueName: \"kubernetes.io/projected/70c383dd-727a-48ae-9c3a-7128a82c4777-kube-api-access-7mnt7\") pod \"70c383dd-727a-48ae-9c3a-7128a82c4777\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.933447 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-scripts\") pod \"70c383dd-727a-48ae-9c3a-7128a82c4777\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.933475 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-public-tls-certs\") pod \"70c383dd-727a-48ae-9c3a-7128a82c4777\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.933507 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-combined-ca-bundle\") pod \"70c383dd-727a-48ae-9c3a-7128a82c4777\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.933541 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-logs\") pod \"70c383dd-727a-48ae-9c3a-7128a82c4777\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.933574 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-config-data\") pod \"70c383dd-727a-48ae-9c3a-7128a82c4777\" (UID: \"70c383dd-727a-48ae-9c3a-7128a82c4777\") " Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.935745 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "70c383dd-727a-48ae-9c3a-7128a82c4777" (UID: "70c383dd-727a-48ae-9c3a-7128a82c4777"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.936472 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-logs" (OuterVolumeSpecName: "logs") pod "70c383dd-727a-48ae-9c3a-7128a82c4777" (UID: "70c383dd-727a-48ae-9c3a-7128a82c4777"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.938900 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.939464 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70c383dd-727a-48ae-9c3a-7128a82c4777-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.942327 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-scripts" (OuterVolumeSpecName: "scripts") pod "70c383dd-727a-48ae-9c3a-7128a82c4777" (UID: "70c383dd-727a-48ae-9c3a-7128a82c4777"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.942554 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c383dd-727a-48ae-9c3a-7128a82c4777-kube-api-access-7mnt7" (OuterVolumeSpecName: "kube-api-access-7mnt7") pod "70c383dd-727a-48ae-9c3a-7128a82c4777" (UID: "70c383dd-727a-48ae-9c3a-7128a82c4777"). InnerVolumeSpecName "kube-api-access-7mnt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:23:30 crc kubenswrapper[4988]: I1123 08:23:30.962100 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70c383dd-727a-48ae-9c3a-7128a82c4777" (UID: "70c383dd-727a-48ae-9c3a-7128a82c4777"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.043599 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-internal-tls-certs\") pod \"3106b206-7500-439c-a269-2c42f1a456d5\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.043687 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct8cq\" (UniqueName: \"kubernetes.io/projected/3106b206-7500-439c-a269-2c42f1a456d5-kube-api-access-ct8cq\") pod \"3106b206-7500-439c-a269-2c42f1a456d5\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.043738 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-combined-ca-bundle\") pod \"3106b206-7500-439c-a269-2c42f1a456d5\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.043762 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-scripts\") pod \"3106b206-7500-439c-a269-2c42f1a456d5\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.043787 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-logs\") pod \"3106b206-7500-439c-a269-2c42f1a456d5\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.043813 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-httpd-run\") pod \"3106b206-7500-439c-a269-2c42f1a456d5\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.043847 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-config-data\") pod \"3106b206-7500-439c-a269-2c42f1a456d5\" (UID: \"3106b206-7500-439c-a269-2c42f1a456d5\") " Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.044128 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mnt7\" (UniqueName: \"kubernetes.io/projected/70c383dd-727a-48ae-9c3a-7128a82c4777-kube-api-access-7mnt7\") on node \"crc\" DevicePath \"\"" Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.044145 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.044154 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.055795 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-logs" (OuterVolumeSpecName: "logs") pod 
"3106b206-7500-439c-a269-2c42f1a456d5" (UID: "3106b206-7500-439c-a269-2c42f1a456d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.056508 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3106b206-7500-439c-a269-2c42f1a456d5" (UID: "3106b206-7500-439c-a269-2c42f1a456d5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.080356 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-scripts" (OuterVolumeSpecName: "scripts") pod "3106b206-7500-439c-a269-2c42f1a456d5" (UID: "3106b206-7500-439c-a269-2c42f1a456d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.080707 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3106b206-7500-439c-a269-2c42f1a456d5-kube-api-access-ct8cq" (OuterVolumeSpecName: "kube-api-access-ct8cq") pod "3106b206-7500-439c-a269-2c42f1a456d5" (UID: "3106b206-7500-439c-a269-2c42f1a456d5"). InnerVolumeSpecName "kube-api-access-ct8cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.139311 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "70c383dd-727a-48ae-9c3a-7128a82c4777" (UID: "70c383dd-727a-48ae-9c3a-7128a82c4777"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.158998 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.159024 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct8cq\" (UniqueName: \"kubernetes.io/projected/3106b206-7500-439c-a269-2c42f1a456d5-kube-api-access-ct8cq\") on node \"crc\" DevicePath \"\""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.159060 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.159072 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-logs\") on node \"crc\" DevicePath \"\""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.159080 4988 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3106b206-7500-439c-a269-2c42f1a456d5-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.163416 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-config-data" (OuterVolumeSpecName: "config-data") pod "70c383dd-727a-48ae-9c3a-7128a82c4777" (UID: "70c383dd-727a-48ae-9c3a-7128a82c4777"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.167343 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3106b206-7500-439c-a269-2c42f1a456d5" (UID: "3106b206-7500-439c-a269-2c42f1a456d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.185332 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3106b206-7500-439c-a269-2c42f1a456d5" (UID: "3106b206-7500-439c-a269-2c42f1a456d5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.206318 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-config-data" (OuterVolumeSpecName: "config-data") pod "3106b206-7500-439c-a269-2c42f1a456d5" (UID: "3106b206-7500-439c-a269-2c42f1a456d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.261019 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.261062 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.261071 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3106b206-7500-439c-a269-2c42f1a456d5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.261080 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c383dd-727a-48ae-9c3a-7128a82c4777-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.285935 4988 generic.go:334] "Generic (PLEG): container finished" podID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerID="c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1" exitCode=0
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.285996 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"70c383dd-727a-48ae-9c3a-7128a82c4777","Type":"ContainerDied","Data":"c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1"}
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.286022 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"70c383dd-727a-48ae-9c3a-7128a82c4777","Type":"ContainerDied","Data":"e4f7298aa4d62834e6793ea11fdade7b7067562e42204d058cb86f6fcd67d410"}
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.286039 4988 scope.go:117] "RemoveContainer" containerID="c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.286151 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.307132 4988 generic.go:334] "Generic (PLEG): container finished" podID="3106b206-7500-439c-a269-2c42f1a456d5" containerID="aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d" exitCode=0
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.307400 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.307829 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3106b206-7500-439c-a269-2c42f1a456d5","Type":"ContainerDied","Data":"aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d"}
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.308219 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3106b206-7500-439c-a269-2c42f1a456d5","Type":"ContainerDied","Data":"5b494213000b9c558e659e6ee9d809e0705736cf670d3147138c202a1bf13e4f"}
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.342120 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.363179 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.378328 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:23:31 crc kubenswrapper[4988]: E1123 08:23:31.378785 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3106b206-7500-439c-a269-2c42f1a456d5" containerName="glance-log"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.378799 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3106b206-7500-439c-a269-2c42f1a456d5" containerName="glance-log"
Nov 23 08:23:31 crc kubenswrapper[4988]: E1123 08:23:31.378823 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerName="glance-log"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.378831 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerName="glance-log"
Nov 23 08:23:31 crc kubenswrapper[4988]: E1123 08:23:31.378857 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerName="glance-httpd"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.378863 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerName="glance-httpd"
Nov 23 08:23:31 crc kubenswrapper[4988]: E1123 08:23:31.378885 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3106b206-7500-439c-a269-2c42f1a456d5" containerName="glance-httpd"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.378890 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3106b206-7500-439c-a269-2c42f1a456d5" containerName="glance-httpd"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.379104 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3106b206-7500-439c-a269-2c42f1a456d5" containerName="glance-httpd"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.379122 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3106b206-7500-439c-a269-2c42f1a456d5" containerName="glance-log"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.379133 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerName="glance-log"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.379149 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c383dd-727a-48ae-9c3a-7128a82c4777" containerName="glance-httpd"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.382677 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.385737 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.385934 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pcqrh"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.386048 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.387654 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.394604 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.410091 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.425379 4988 scope.go:117] "RemoveContainer" containerID="c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.428809 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.439423 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.441037 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.443289 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.443497 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464553 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464591 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7chd\" (UniqueName: \"kubernetes.io/projected/516e27e0-3465-45ab-9f04-76306f952a0b-kube-api-access-w7chd\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464649 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464718 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-logs\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464759 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464786 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464803 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516e27e0-3465-45ab-9f04-76306f952a0b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464834 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516e27e0-3465-45ab-9f04-76306f952a0b-logs\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464871 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464892 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464923 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464940 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464959 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfwgw\" (UniqueName: \"kubernetes.io/projected/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-kube-api-access-kfwgw\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.464977 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.465359 4988 scope.go:117] "RemoveContainer" containerID="c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1"
Nov 23 08:23:31 crc kubenswrapper[4988]: E1123 08:23:31.466469 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1\": container with ID starting with c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1 not found: ID does not exist" containerID="c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.466498 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1"} err="failed to get container status \"c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1\": rpc error: code = NotFound desc = could not find container \"c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1\": container with ID starting with c2aea8ac43b3117212ae80ae2ba81ac2c6b57614e364d85aa22ce370bf595df1 not found: ID does not exist"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.466520 4988 scope.go:117] "RemoveContainer" containerID="c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868"
Nov 23 08:23:31 crc kubenswrapper[4988]: E1123 08:23:31.468421 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868\": container with ID starting with c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868 not found: ID does not exist" containerID="c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.468462 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868"} err="failed to get container status \"c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868\": rpc error: code = NotFound desc = could not find container \"c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868\": container with ID starting with c458566b3f72a1a31bc34c277a8cab6a5920a68ba20aa2d1c6b39ba99d621868 not found: ID does not exist"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.468486 4988 scope.go:117] "RemoveContainer" containerID="aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.475357 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.511272 4988 scope.go:117] "RemoveContainer" containerID="6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.547723 4988 scope.go:117] "RemoveContainer" containerID="aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d"
Nov 23 08:23:31 crc kubenswrapper[4988]: E1123 08:23:31.550038 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d\": container with ID starting with aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d not found: ID does not exist" containerID="aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.550074 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d"} err="failed to get container status \"aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d\": rpc error: code = NotFound desc = could not find container \"aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d\": container with ID starting with aae7c79d67e0bdfcb6a477b6510f43bb1e5af0ca92ac7ce4c16e2579432a8b0d not found: ID does not exist"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.550100 4988 scope.go:117] "RemoveContainer" containerID="6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9"
Nov 23 08:23:31 crc kubenswrapper[4988]: E1123 08:23:31.550930 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9\": container with ID starting with 6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9 not found: ID does not exist" containerID="6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.550959 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9"} err="failed to get container status \"6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9\": rpc error: code = NotFound desc = could not find container \"6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9\": container with ID starting with 6e4086da869b82b44ba03bf7c709d1042296389b3cadbdf28642faa1523417b9 not found: ID does not exist"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.566561 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516e27e0-3465-45ab-9f04-76306f952a0b-logs\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.566667 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.566691 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.566777 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.566805 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.566837 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfwgw\" (UniqueName: \"kubernetes.io/projected/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-kube-api-access-kfwgw\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.566859 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.567138 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.567162 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7chd\" (UniqueName: \"kubernetes.io/projected/516e27e0-3465-45ab-9f04-76306f952a0b-kube-api-access-w7chd\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.567241 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.567277 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-logs\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.567358 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.567396 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.567477 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516e27e0-3465-45ab-9f04-76306f952a0b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.567512 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/516e27e0-3465-45ab-9f04-76306f952a0b-logs\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.567854 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/516e27e0-3465-45ab-9f04-76306f952a0b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.568439 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-logs\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.568496 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.571021 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.572507 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.572540 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.573937 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.575862 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.577448 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/516e27e0-3465-45ab-9f04-76306f952a0b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.577958 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.584096 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7chd\" (UniqueName: \"kubernetes.io/projected/516e27e0-3465-45ab-9f04-76306f952a0b-kube-api-access-w7chd\") pod \"glance-default-internal-api-0\" (UID: \"516e27e0-3465-45ab-9f04-76306f952a0b\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.585822 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfwgw\" (UniqueName: \"kubernetes.io/projected/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-kube-api-access-kfwgw\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.587939 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf7d70c6-f3ab-4ff4-a49b-23401df56b9e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e\") " pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.779352 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:31 crc kubenswrapper[4988]: I1123 08:23:31.788879 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:32 crc kubenswrapper[4988]: I1123 08:23:32.510243 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3106b206-7500-439c-a269-2c42f1a456d5" path="/var/lib/kubelet/pods/3106b206-7500-439c-a269-2c42f1a456d5/volumes"
Nov 23 08:23:32 crc kubenswrapper[4988]: I1123 08:23:32.511997 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c383dd-727a-48ae-9c3a-7128a82c4777" path="/var/lib/kubelet/pods/70c383dd-727a-48ae-9c3a-7128a82c4777/volumes"
Nov 23 08:23:33 crc kubenswrapper[4988]: I1123 08:23:33.495934 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84"
Nov 23 08:23:33 crc kubenswrapper[4988]: E1123 08:23:33.496371 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:23:37 crc kubenswrapper[4988]: I1123 08:23:37.382934 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79ccf4df9b-dnc54" event={"ID":"fee15316-b0c1-4900-95fd-49110a4bab1a","Type":"ContainerStarted","Data":"5eff7259b5e9b42064fc03cfc94c87f6a26c2210d84ca74820726d1fde757599"}
Nov 23 08:23:37 crc kubenswrapper[4988]: I1123 08:23:37.388518 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9fbffcd5f-x5rxw" event={"ID":"3243f1b5-995a-48f5-9fea-4969228c6cf3","Type":"ContainerStarted","Data":"fe05581fc8126762c4a76b701bf43004e7d1c1b6887183fa83b4640826b7af2b"}
Nov 23 08:23:37 crc kubenswrapper[4988]: I1123 08:23:37.392053 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d7859794-vrzpf" event={"ID":"8b25f631-1103-4844-89c0-02fb680c32d4","Type":"ContainerStarted","Data":"f624f50fe15e5293fdfd37a83767d6b97602e59e4b8515727fd5b417be45ac93"}
Nov 23 08:23:37 crc kubenswrapper[4988]: I1123 08:23:37.394388 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-749775bd49-v4zw6" event={"ID":"a38cc9a1-597b-48d9-95bd-d9c39a91d46a","Type":"ContainerStarted","Data":"64993773b7e0d8b7e49a775d8353e4069a88db7a2f387bae2e22a4187e9ae03e"}
Nov 23 08:23:37 crc kubenswrapper[4988]: I1123 08:23:37.456882 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.220148 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.427405 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9fbffcd5f-x5rxw" event={"ID":"3243f1b5-995a-48f5-9fea-4969228c6cf3","Type":"ContainerStarted","Data":"acc71939f5a00c7d067a7f7ae2a9cd69ccba8739c4ce88e13b23ec21f9931eb3"}
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.427581 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9fbffcd5f-x5rxw" podUID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerName="horizon" containerID="cri-o://acc71939f5a00c7d067a7f7ae2a9cd69ccba8739c4ce88e13b23ec21f9931eb3" gracePeriod=30
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.427606 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9fbffcd5f-x5rxw" podUID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerName="horizon-log" containerID="cri-o://fe05581fc8126762c4a76b701bf43004e7d1c1b6887183fa83b4640826b7af2b" gracePeriod=30
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.441036 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e","Type":"ContainerStarted","Data":"36783f76728fcef5bacb871da82f428330d0a240378cf531818ec5eb068219e1"}
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.441086 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e","Type":"ContainerStarted","Data":"0dd9233b410ab64dad22d683eef52b07455c24fc43d7a6aabb7907ec6b8976db"}
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.442905 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"516e27e0-3465-45ab-9f04-76306f952a0b","Type":"ContainerStarted","Data":"00aff6130df47384a6bc38c625bb4a5fde14ec0a0a78da1290059742fa1d5e6c"}
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.450356 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d7859794-vrzpf" event={"ID":"8b25f631-1103-4844-89c0-02fb680c32d4","Type":"ContainerStarted","Data":"cee4eeec4e17d6158e4acd3052ef7932456dc388c11ab92de0d2b67d2e8a271f"}
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.457284 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9fbffcd5f-x5rxw" podStartSLOduration=2.379206044 podStartE2EDuration="11.457260914s" podCreationTimestamp="2025-11-23 08:23:27 +0000 UTC" firstStartedPulling="2025-11-23 08:23:27.910304426 +0000 UTC m=+5860.218817189" lastFinishedPulling="2025-11-23 08:23:36.988359286 +0000 UTC m=+5869.296872059" observedRunningTime="2025-11-23 08:23:38.449559166 +0000 UTC m=+5870.758071929" watchObservedRunningTime="2025-11-23 08:23:38.457260914 +0000 UTC m=+5870.765773677"
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.478365 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-749775bd49-v4zw6" event={"ID":"a38cc9a1-597b-48d9-95bd-d9c39a91d46a","Type":"ContainerStarted","Data":"5328974e4c14e1b4edc0a6834e686cb2ec5ba27a4126bf215439342975fc6f9d"}
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.478455 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-749775bd49-v4zw6" podUID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerName="horizon-log" containerID="cri-o://64993773b7e0d8b7e49a775d8353e4069a88db7a2f387bae2e22a4187e9ae03e" gracePeriod=30
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.478603 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-749775bd49-v4zw6" podUID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerName="horizon" containerID="cri-o://5328974e4c14e1b4edc0a6834e686cb2ec5ba27a4126bf215439342975fc6f9d" gracePeriod=30
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.478851 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77d7859794-vrzpf" podStartSLOduration=3.4441714660000002 podStartE2EDuration="10.47883601s" podCreationTimestamp="2025-11-23 08:23:28 +0000 UTC" firstStartedPulling="2025-11-23 08:23:29.883838777 +0000 UTC m=+5862.192351530" lastFinishedPulling="2025-11-23 08:23:36.918503311 +0000 UTC m=+5869.227016074" observedRunningTime="2025-11-23 08:23:38.470302092 +0000 UTC m=+5870.778814855" watchObservedRunningTime="2025-11-23 08:23:38.47883601 +0000 UTC m=+5870.787348773"
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.484129 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79ccf4df9b-dnc54" event={"ID":"fee15316-b0c1-4900-95fd-49110a4bab1a","Type":"ContainerStarted","Data":"2bc9b1ba3ee656da3d3db7f4806251c70cfa46b31562602dfc5b28713c09068b"}
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.505227 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-749775bd49-v4zw6" podStartSLOduration=2.598287109 podStartE2EDuration="11.505185263s" podCreationTimestamp="2025-11-23 08:23:27 +0000 UTC" firstStartedPulling="2025-11-23 08:23:28.017343957 +0000 UTC m=+5860.325856720" lastFinishedPulling="2025-11-23 08:23:36.924242101 +0000 UTC m=+5869.232754874" observedRunningTime="2025-11-23 08:23:38.496592383 +0000 UTC m=+5870.805105156" watchObservedRunningTime="2025-11-23 08:23:38.505185263 +0000 UTC m=+5870.813698026"
Nov 23 08:23:38 crc kubenswrapper[4988]: I1123 08:23:38.532916 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-79ccf4df9b-dnc54" podStartSLOduration=2.494050913 podStartE2EDuration="9.532895869s" podCreationTimestamp="2025-11-23 08:23:29 +0000 UTC" firstStartedPulling="2025-11-23 08:23:29.95036264 +0000 UTC m=+5862.258875403" lastFinishedPulling="2025-11-23 08:23:36.989207596 +0000 UTC m=+5869.297720359" observedRunningTime="2025-11-23 08:23:38.52187086 +0000 UTC m=+5870.830383633" watchObservedRunningTime="2025-11-23 08:23:38.532895869 +0000 UTC m=+5870.841408632"
Nov 23 08:23:39 crc kubenswrapper[4988]: I1123 08:23:39.373302 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77d7859794-vrzpf"
Nov 23 08:23:39 crc kubenswrapper[4988]: I1123 08:23:39.373714 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77d7859794-vrzpf"
Nov 23 08:23:39 crc kubenswrapper[4988]: I1123 08:23:39.411086 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-79ccf4df9b-dnc54"
Nov 23 08:23:39 crc kubenswrapper[4988]: I1123 08:23:39.411128 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-79ccf4df9b-dnc54"
Nov 23 08:23:39 crc kubenswrapper[4988]: I1123 08:23:39.495489 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"516e27e0-3465-45ab-9f04-76306f952a0b","Type":"ContainerStarted","Data":"5f13b17619dfe8066960d2cd558f7f079f6c308e87d29a1225f203b2db93279b"}
Nov 23 08:23:39 crc kubenswrapper[4988]: I1123 08:23:39.497759 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf7d70c6-f3ab-4ff4-a49b-23401df56b9e","Type":"ContainerStarted","Data":"7471e329ffc9b96cc7de5b559421bfaa96bac6a3c76415109b325d08120d7d51"}
Nov 23 08:23:39 crc kubenswrapper[4988]: I1123 08:23:39.533382 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.533363549 podStartE2EDuration="8.533363549s" podCreationTimestamp="2025-11-23 08:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:23:39.529225518 +0000 UTC m=+5871.837738291" watchObservedRunningTime="2025-11-23 08:23:39.533363549 +0000 UTC m=+5871.841876312"
Nov 23 08:23:40 crc kubenswrapper[4988]: I1123 08:23:40.519326 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"516e27e0-3465-45ab-9f04-76306f952a0b","Type":"ContainerStarted","Data":"92a957ac3d3577fc5633bc10e25f0793ee89ed83d32d9875d3e632421e30fcac"}
Nov 23 08:23:40 crc kubenswrapper[4988]: I1123 08:23:40.538542 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.538519094 podStartE2EDuration="9.538519094s" podCreationTimestamp="2025-11-23 08:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:23:40.532520767 +0000 UTC m=+5872.841033540" watchObservedRunningTime="2025-11-23 08:23:40.538519094 +0000 UTC m=+5872.847031867"
Nov 23 08:23:41 crc kubenswrapper[4988]: I1123 08:23:41.779908 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:41 crc kubenswrapper[4988]: I1123 08:23:41.781877 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:41 crc kubenswrapper[4988]: I1123 08:23:41.790396 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:41 crc kubenswrapper[4988]: I1123 08:23:41.790446 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:41 crc kubenswrapper[4988]: I1123 08:23:41.874484 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:41 crc kubenswrapper[4988]: I1123 08:23:41.874574 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:41 crc kubenswrapper[4988]: I1123 08:23:41.901817 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:41 crc kubenswrapper[4988]: I1123 08:23:41.907720 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:42 crc kubenswrapper[4988]: I1123 08:23:42.529165 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:42 crc kubenswrapper[4988]: I1123 08:23:42.529211 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:42 crc kubenswrapper[4988]: I1123 08:23:42.529222 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:42 crc kubenswrapper[4988]: I1123 08:23:42.529233 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:45 crc kubenswrapper[4988]: I1123 08:23:45.899440 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:45 crc kubenswrapper[4988]: I1123 08:23:45.910825 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 23 08:23:45 crc kubenswrapper[4988]: I1123 08:23:45.999785 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:46 crc kubenswrapper[4988]: I1123 08:23:46.030989 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 23 08:23:47 crc kubenswrapper[4988]: I1123 08:23:47.422033 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9fbffcd5f-x5rxw"
Nov 23 08:23:47 crc kubenswrapper[4988]: I1123 08:23:47.503921 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-749775bd49-v4zw6"
Nov 23 08:23:48 crc kubenswrapper[4988]: I1123 08:23:48.501814 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84"
Nov 23 08:23:48 crc kubenswrapper[4988]: E1123 08:23:48.502102 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:23:49 crc kubenswrapper[4988]: I1123 08:23:49.377420 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77d7859794-vrzpf" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.94:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.94:8443: connect: connection refused"
Nov 23 08:23:49 crc kubenswrapper[4988]: I1123 08:23:49.412050 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-79ccf4df9b-dnc54" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.95:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.95:8443: connect: connection refused"
Nov 23 08:24:01 crc kubenswrapper[4988]: I1123 08:24:01.558285 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-79ccf4df9b-dnc54"
Nov 23 08:24:01 crc kubenswrapper[4988]: I1123 08:24:01.590021 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-77d7859794-vrzpf"
Nov 23 08:24:03 crc kubenswrapper[4988]: I1123 08:24:03.264959 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-77d7859794-vrzpf"
Nov 23 08:24:03 crc kubenswrapper[4988]: I1123 08:24:03.408526 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-79ccf4df9b-dnc54"
Nov 23 08:24:03 crc kubenswrapper[4988]: I1123 08:24:03.480408 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77d7859794-vrzpf"]
Nov 23 08:24:03 crc kubenswrapper[4988]: I1123 08:24:03.496062 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84"
Nov 23 08:24:03 crc kubenswrapper[4988]: E1123 08:24:03.496462 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:24:03 crc kubenswrapper[4988]: I1123 08:24:03.733536 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77d7859794-vrzpf" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon" containerID="cri-o://cee4eeec4e17d6158e4acd3052ef7932456dc388c11ab92de0d2b67d2e8a271f" gracePeriod=30
Nov 23 08:24:03 crc kubenswrapper[4988]: I1123 08:24:03.733814 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77d7859794-vrzpf" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon-log" containerID="cri-o://f624f50fe15e5293fdfd37a83767d6b97602e59e4b8515727fd5b417be45ac93" gracePeriod=30
Nov 23 08:24:07 crc kubenswrapper[4988]: I1123 08:24:07.778427 4988 generic.go:334] "Generic (PLEG): container finished" podID="8b25f631-1103-4844-89c0-02fb680c32d4" containerID="cee4eeec4e17d6158e4acd3052ef7932456dc388c11ab92de0d2b67d2e8a271f" exitCode=0
Nov 23 08:24:07 crc kubenswrapper[4988]: I1123 08:24:07.778579 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d7859794-vrzpf" event={"ID":"8b25f631-1103-4844-89c0-02fb680c32d4","Type":"ContainerDied","Data":"cee4eeec4e17d6158e4acd3052ef7932456dc388c11ab92de0d2b67d2e8a271f"}
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.793286 4988 generic.go:334] "Generic (PLEG): container finished" podID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerID="5328974e4c14e1b4edc0a6834e686cb2ec5ba27a4126bf215439342975fc6f9d" exitCode=137
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.793645 4988 generic.go:334] "Generic (PLEG): container finished" podID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerID="64993773b7e0d8b7e49a775d8353e4069a88db7a2f387bae2e22a4187e9ae03e" exitCode=137
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.793332 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-749775bd49-v4zw6" event={"ID":"a38cc9a1-597b-48d9-95bd-d9c39a91d46a","Type":"ContainerDied","Data":"5328974e4c14e1b4edc0a6834e686cb2ec5ba27a4126bf215439342975fc6f9d"}
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.793717 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-749775bd49-v4zw6" event={"ID":"a38cc9a1-597b-48d9-95bd-d9c39a91d46a","Type":"ContainerDied","Data":"64993773b7e0d8b7e49a775d8353e4069a88db7a2f387bae2e22a4187e9ae03e"}
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.803117 4988 generic.go:334] "Generic (PLEG): container finished" podID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerID="acc71939f5a00c7d067a7f7ae2a9cd69ccba8739c4ce88e13b23ec21f9931eb3" exitCode=137
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.803152 4988 generic.go:334] "Generic (PLEG): container finished" podID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerID="fe05581fc8126762c4a76b701bf43004e7d1c1b6887183fa83b4640826b7af2b" exitCode=137
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.803175 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9fbffcd5f-x5rxw" event={"ID":"3243f1b5-995a-48f5-9fea-4969228c6cf3","Type":"ContainerDied","Data":"acc71939f5a00c7d067a7f7ae2a9cd69ccba8739c4ce88e13b23ec21f9931eb3"}
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.803231 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9fbffcd5f-x5rxw" event={"ID":"3243f1b5-995a-48f5-9fea-4969228c6cf3","Type":"ContainerDied","Data":"fe05581fc8126762c4a76b701bf43004e7d1c1b6887183fa83b4640826b7af2b"}
Nov 23 08:24:08 crc kubenswrapper[4988]: E1123 08:24:08.849397 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda38cc9a1_597b_48d9_95bd_d9c39a91d46a.slice/crio-5328974e4c14e1b4edc0a6834e686cb2ec5ba27a4126bf215439342975fc6f9d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda38cc9a1_597b_48d9_95bd_d9c39a91d46a.slice/crio-conmon-64993773b7e0d8b7e49a775d8353e4069a88db7a2f387bae2e22a4187e9ae03e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda38cc9a1_597b_48d9_95bd_d9c39a91d46a.slice/crio-conmon-5328974e4c14e1b4edc0a6834e686cb2ec5ba27a4126bf215439342975fc6f9d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda38cc9a1_597b_48d9_95bd_d9c39a91d46a.slice/crio-64993773b7e0d8b7e49a775d8353e4069a88db7a2f387bae2e22a4187e9ae03e.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.956365 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9fbffcd5f-x5rxw"
Nov 23 08:24:08 crc kubenswrapper[4988]: I1123 08:24:08.963623 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-749775bd49-v4zw6"
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.042921 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-config-data\") pod \"3243f1b5-995a-48f5-9fea-4969228c6cf3\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.043220 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxhgb\" (UniqueName: \"kubernetes.io/projected/3243f1b5-995a-48f5-9fea-4969228c6cf3-kube-api-access-vxhgb\") pod \"3243f1b5-995a-48f5-9fea-4969228c6cf3\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.043245 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klgk6\" (UniqueName: \"kubernetes.io/projected/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-kube-api-access-klgk6\") pod \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.043298 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-scripts\") pod \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.043369 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-horizon-secret-key\") pod \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.043402 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3243f1b5-995a-48f5-9fea-4969228c6cf3-logs\") pod \"3243f1b5-995a-48f5-9fea-4969228c6cf3\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.043419 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3243f1b5-995a-48f5-9fea-4969228c6cf3-horizon-secret-key\") pod \"3243f1b5-995a-48f5-9fea-4969228c6cf3\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.043458 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-scripts\") pod \"3243f1b5-995a-48f5-9fea-4969228c6cf3\" (UID: \"3243f1b5-995a-48f5-9fea-4969228c6cf3\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.043497 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-logs\") pod \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.043550 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-config-data\") pod \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\" (UID: \"a38cc9a1-597b-48d9-95bd-d9c39a91d46a\") "
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.045494 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3243f1b5-995a-48f5-9fea-4969228c6cf3-logs" (OuterVolumeSpecName: "logs") pod "3243f1b5-995a-48f5-9fea-4969228c6cf3" (UID: "3243f1b5-995a-48f5-9fea-4969228c6cf3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.046541 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-logs" (OuterVolumeSpecName: "logs") pod "a38cc9a1-597b-48d9-95bd-d9c39a91d46a" (UID: "a38cc9a1-597b-48d9-95bd-d9c39a91d46a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.049519 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-kube-api-access-klgk6" (OuterVolumeSpecName: "kube-api-access-klgk6") pod "a38cc9a1-597b-48d9-95bd-d9c39a91d46a" (UID: "a38cc9a1-597b-48d9-95bd-d9c39a91d46a"). InnerVolumeSpecName "kube-api-access-klgk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.050347 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3243f1b5-995a-48f5-9fea-4969228c6cf3-kube-api-access-vxhgb" (OuterVolumeSpecName: "kube-api-access-vxhgb") pod "3243f1b5-995a-48f5-9fea-4969228c6cf3" (UID: "3243f1b5-995a-48f5-9fea-4969228c6cf3"). InnerVolumeSpecName "kube-api-access-vxhgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.050866 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a38cc9a1-597b-48d9-95bd-d9c39a91d46a" (UID: "a38cc9a1-597b-48d9-95bd-d9c39a91d46a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.054150 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3243f1b5-995a-48f5-9fea-4969228c6cf3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3243f1b5-995a-48f5-9fea-4969228c6cf3" (UID: "3243f1b5-995a-48f5-9fea-4969228c6cf3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.070798 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-config-data" (OuterVolumeSpecName: "config-data") pod "3243f1b5-995a-48f5-9fea-4969228c6cf3" (UID: "3243f1b5-995a-48f5-9fea-4969228c6cf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.071417 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-config-data" (OuterVolumeSpecName: "config-data") pod "a38cc9a1-597b-48d9-95bd-d9c39a91d46a" (UID: "a38cc9a1-597b-48d9-95bd-d9c39a91d46a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.073948 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-scripts" (OuterVolumeSpecName: "scripts") pod "a38cc9a1-597b-48d9-95bd-d9c39a91d46a" (UID: "a38cc9a1-597b-48d9-95bd-d9c39a91d46a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.075096 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-scripts" (OuterVolumeSpecName: "scripts") pod "3243f1b5-995a-48f5-9fea-4969228c6cf3" (UID: "3243f1b5-995a-48f5-9fea-4969228c6cf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145797 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145829 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxhgb\" (UniqueName: \"kubernetes.io/projected/3243f1b5-995a-48f5-9fea-4969228c6cf3-kube-api-access-vxhgb\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145839 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klgk6\" (UniqueName: \"kubernetes.io/projected/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-kube-api-access-klgk6\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145849 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145857 4988 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145867 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3243f1b5-995a-48f5-9fea-4969228c6cf3-logs\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145877 4988 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3243f1b5-995a-48f5-9fea-4969228c6cf3-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145887 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3243f1b5-995a-48f5-9fea-4969228c6cf3-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145894 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-logs\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.145902 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38cc9a1-597b-48d9-95bd-d9c39a91d46a-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.373867 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77d7859794-vrzpf" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.94:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.94:8443: connect: connection refused"
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.815061 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-749775bd49-v4zw6"
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.815064 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-749775bd49-v4zw6" event={"ID":"a38cc9a1-597b-48d9-95bd-d9c39a91d46a","Type":"ContainerDied","Data":"13cd84cb1c31ca909002b12e9c2b2e7d2428f0ed315dd9d64fa1ca474ca46f51"}
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.815238 4988 scope.go:117] "RemoveContainer" containerID="5328974e4c14e1b4edc0a6834e686cb2ec5ba27a4126bf215439342975fc6f9d"
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.817960 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9fbffcd5f-x5rxw" event={"ID":"3243f1b5-995a-48f5-9fea-4969228c6cf3","Type":"ContainerDied","Data":"88f4fa91fae5c5bd0b59c6f01a12d5bcaab20177fab94361530d9fb8f9c675d0"}
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.818026 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9fbffcd5f-x5rxw"
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.882660 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-749775bd49-v4zw6"]
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.891636 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-749775bd49-v4zw6"]
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.899233 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9fbffcd5f-x5rxw"]
Nov 23 08:24:09 crc kubenswrapper[4988]: I1123 08:24:09.905816 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9fbffcd5f-x5rxw"]
Nov 23 08:24:10 crc kubenswrapper[4988]: I1123 08:24:10.044346 4988 scope.go:117] "RemoveContainer" containerID="64993773b7e0d8b7e49a775d8353e4069a88db7a2f387bae2e22a4187e9ae03e"
Nov 23 08:24:10 crc kubenswrapper[4988]: I1123 08:24:10.073766 4988 scope.go:117] "RemoveContainer" containerID="acc71939f5a00c7d067a7f7ae2a9cd69ccba8739c4ce88e13b23ec21f9931eb3"
Nov 23 08:24:10 crc kubenswrapper[4988]: I1123 08:24:10.283827 4988 scope.go:117] "RemoveContainer" containerID="fe05581fc8126762c4a76b701bf43004e7d1c1b6887183fa83b4640826b7af2b"
Nov 23 08:24:10 crc kubenswrapper[4988]: I1123 08:24:10.508355 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3243f1b5-995a-48f5-9fea-4969228c6cf3" path="/var/lib/kubelet/pods/3243f1b5-995a-48f5-9fea-4969228c6cf3/volumes"
Nov 23 08:24:10 crc kubenswrapper[4988]: I1123 08:24:10.509861 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" path="/var/lib/kubelet/pods/a38cc9a1-597b-48d9-95bd-d9c39a91d46a/volumes"
Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.834780 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gs9s2"]
Nov 23 08:24:16 crc kubenswrapper[4988]: E1123 08:24:16.835796 4988 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerName="horizon-log" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.835811 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerName="horizon-log" Nov 23 08:24:16 crc kubenswrapper[4988]: E1123 08:24:16.835821 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerName="horizon" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.835829 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerName="horizon" Nov 23 08:24:16 crc kubenswrapper[4988]: E1123 08:24:16.835843 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerName="horizon" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.835849 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerName="horizon" Nov 23 08:24:16 crc kubenswrapper[4988]: E1123 08:24:16.835871 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerName="horizon-log" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.835877 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerName="horizon-log" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.836056 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerName="horizon" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.836072 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerName="horizon" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.836085 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3243f1b5-995a-48f5-9fea-4969228c6cf3" containerName="horizon-log" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.836102 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="a38cc9a1-597b-48d9-95bd-d9c39a91d46a" containerName="horizon-log" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.837465 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.862265 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gs9s2"] Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.933409 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88g8s\" (UniqueName: \"kubernetes.io/projected/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-kube-api-access-88g8s\") pod \"certified-operators-gs9s2\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.933680 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-catalog-content\") pod \"certified-operators-gs9s2\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:16 crc kubenswrapper[4988]: I1123 08:24:16.933807 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-utilities\") pod \"certified-operators-gs9s2\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:17 crc kubenswrapper[4988]: I1123 08:24:17.035928 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-catalog-content\") pod \"certified-operators-gs9s2\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:17 crc kubenswrapper[4988]: I1123 08:24:17.036010 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-utilities\") pod \"certified-operators-gs9s2\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:17 crc kubenswrapper[4988]: I1123 08:24:17.036073 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88g8s\" (UniqueName: \"kubernetes.io/projected/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-kube-api-access-88g8s\") pod \"certified-operators-gs9s2\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:17 crc kubenswrapper[4988]: I1123 08:24:17.036528 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-catalog-content\") pod \"certified-operators-gs9s2\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:17 crc kubenswrapper[4988]: I1123 08:24:17.036614 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-utilities\") pod \"certified-operators-gs9s2\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:17 crc kubenswrapper[4988]: I1123 08:24:17.064762 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-88g8s\" (UniqueName: \"kubernetes.io/projected/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-kube-api-access-88g8s\") pod \"certified-operators-gs9s2\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:17 crc kubenswrapper[4988]: I1123 08:24:17.216663 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:17 crc kubenswrapper[4988]: I1123 08:24:17.748116 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gs9s2"] Nov 23 08:24:17 crc kubenswrapper[4988]: I1123 08:24:17.930960 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs9s2" event={"ID":"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec","Type":"ContainerStarted","Data":"f279775aeb179a94fcc754e26c167f863bd514f61a25e31ab090b3678898c325"} Nov 23 08:24:18 crc kubenswrapper[4988]: I1123 08:24:18.510653 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:24:18 crc kubenswrapper[4988]: E1123 08:24:18.511118 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:24:18 crc kubenswrapper[4988]: I1123 08:24:18.939738 4988 generic.go:334] "Generic (PLEG): container finished" podID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerID="9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518" exitCode=0 Nov 23 08:24:18 crc kubenswrapper[4988]: I1123 08:24:18.939796 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs9s2" event={"ID":"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec","Type":"ContainerDied","Data":"9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518"} Nov 23 08:24:19 crc kubenswrapper[4988]: I1123 08:24:19.373903 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77d7859794-vrzpf" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.94:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.94:8443: connect: connection refused" Nov 23 08:24:19 crc kubenswrapper[4988]: I1123 08:24:19.957273 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs9s2" event={"ID":"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec","Type":"ContainerStarted","Data":"d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a"} Nov 23 08:24:20 crc kubenswrapper[4988]: I1123 08:24:20.971104 4988 generic.go:334] "Generic (PLEG): container finished" podID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerID="d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a" exitCode=0 Nov 23 08:24:20 crc kubenswrapper[4988]: I1123 08:24:20.971222 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs9s2" event={"ID":"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec","Type":"ContainerDied","Data":"d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a"} Nov 23 08:24:21 crc 
kubenswrapper[4988]: I1123 08:24:21.988808 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs9s2" event={"ID":"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec","Type":"ContainerStarted","Data":"f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0"} Nov 23 08:24:22 crc kubenswrapper[4988]: I1123 08:24:22.036976 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gs9s2" podStartSLOduration=3.527328091 podStartE2EDuration="6.036945101s" podCreationTimestamp="2025-11-23 08:24:16 +0000 UTC" firstStartedPulling="2025-11-23 08:24:18.942088832 +0000 UTC m=+5911.250601595" lastFinishedPulling="2025-11-23 08:24:21.451705812 +0000 UTC m=+5913.760218605" observedRunningTime="2025-11-23 08:24:22.017581528 +0000 UTC m=+5914.326094381" watchObservedRunningTime="2025-11-23 08:24:22.036945101 +0000 UTC m=+5914.345457904" Nov 23 08:24:27 crc kubenswrapper[4988]: I1123 08:24:27.218043 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:27 crc kubenswrapper[4988]: I1123 08:24:27.218446 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:27 crc kubenswrapper[4988]: I1123 08:24:27.291919 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:28 crc kubenswrapper[4988]: I1123 08:24:28.078899 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7bfb-account-create-8plcz"] Nov 23 08:24:28 crc kubenswrapper[4988]: I1123 08:24:28.098752 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-n8phq"] Nov 23 08:24:28 crc kubenswrapper[4988]: I1123 08:24:28.117391 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7bfb-account-create-8plcz"] Nov 23 08:24:28 crc kubenswrapper[4988]: I1123 08:24:28.128292 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-n8phq"] Nov 23 08:24:28 crc kubenswrapper[4988]: I1123 08:24:28.147235 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:28 crc kubenswrapper[4988]: I1123 08:24:28.203251 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gs9s2"] Nov 23 08:24:28 crc kubenswrapper[4988]: I1123 08:24:28.514409 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9496c06f-ea1b-4017-8b27-3d9bec1df46d" path="/var/lib/kubelet/pods/9496c06f-ea1b-4017-8b27-3d9bec1df46d/volumes" Nov 23 08:24:28 crc kubenswrapper[4988]: I1123 08:24:28.515596 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe853168-6e13-40b0-9a8a-89399141ab0e" path="/var/lib/kubelet/pods/fe853168-6e13-40b0-9a8a-89399141ab0e/volumes" Nov 23 08:24:29 crc kubenswrapper[4988]: I1123 08:24:29.373619 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77d7859794-vrzpf" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.94:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.94:8443: connect: connection refused" Nov 23 08:24:29 crc kubenswrapper[4988]: I1123 08:24:29.373737 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.099717 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gs9s2" podUID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerName="registry-server" containerID="cri-o://f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0" gracePeriod=2 Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.678020 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.761689 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-catalog-content\") pod \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.761753 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-utilities\") pod \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.762105 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88g8s\" (UniqueName: \"kubernetes.io/projected/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-kube-api-access-88g8s\") pod \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\" (UID: \"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec\") " Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.763097 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-utilities" (OuterVolumeSpecName: "utilities") pod "99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" (UID: "99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.771048 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-kube-api-access-88g8s" (OuterVolumeSpecName: "kube-api-access-88g8s") pod "99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" (UID: "99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec"). InnerVolumeSpecName "kube-api-access-88g8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.809669 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" (UID: "99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.864145 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88g8s\" (UniqueName: \"kubernetes.io/projected/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-kube-api-access-88g8s\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.864185 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:30 crc kubenswrapper[4988]: I1123 08:24:30.864261 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.114868 4988 generic.go:334] "Generic (PLEG): container finished" podID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerID="f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0" exitCode=0 Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.114925 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs9s2" event={"ID":"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec","Type":"ContainerDied","Data":"f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0"} Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.114963 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs9s2" event={"ID":"99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec","Type":"ContainerDied","Data":"f279775aeb179a94fcc754e26c167f863bd514f61a25e31ab090b3678898c325"} Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.114963 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gs9s2" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.114983 4988 scope.go:117] "RemoveContainer" containerID="f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.167861 4988 scope.go:117] "RemoveContainer" containerID="d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.196290 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gs9s2"] Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.208400 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gs9s2"] Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.228705 4988 scope.go:117] "RemoveContainer" containerID="9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.272726 4988 scope.go:117] "RemoveContainer" containerID="f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0" Nov 23 08:24:31 crc kubenswrapper[4988]: E1123 08:24:31.275485 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0\": container with ID starting with f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0 not found: ID does not exist" containerID="f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.275546 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0"} err="failed to get container status \"f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0\": rpc error: code = NotFound desc = could not find container \"f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0\": container with ID starting with f49e5ddfcfbf85905955e7f504f1b50b5113be931ec03a67cee2a26a2aaa63c0 not found: ID does not exist" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.275582 4988 scope.go:117] "RemoveContainer" containerID="d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a" Nov 23 08:24:31 crc kubenswrapper[4988]: E1123 08:24:31.275975 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a\": container with ID starting with d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a not found: ID does not exist" containerID="d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.276006 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a"} err="failed to get container status \"d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a\": rpc error: code = NotFound desc = could not find container \"d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a\": container with ID starting with d5eba174d11e1d81d66284366761a198c6a1e39f7dad161b492210d99f2e7c8a not found: ID does not exist" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.276025 4988 scope.go:117] "RemoveContainer" 
containerID="9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518" Nov 23 08:24:31 crc kubenswrapper[4988]: E1123 08:24:31.276420 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518\": container with ID starting with 9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518 not found: ID does not exist" containerID="9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.276455 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518"} err="failed to get container status \"9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518\": rpc error: code = NotFound desc = could not find container \"9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518\": container with ID starting with 9411d87bd56bbb1923708bc3109cf0a9298a51d1df42c6b383031a18e5946518 not found: ID does not exist" Nov 23 08:24:31 crc kubenswrapper[4988]: I1123 08:24:31.496674 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:24:31 crc kubenswrapper[4988]: E1123 08:24:31.497351 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:24:32 crc kubenswrapper[4988]: I1123 08:24:32.512121 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" path="/var/lib/kubelet/pods/99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec/volumes" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.148392 4988 generic.go:334] "Generic (PLEG): container finished" podID="8b25f631-1103-4844-89c0-02fb680c32d4" containerID="f624f50fe15e5293fdfd37a83767d6b97602e59e4b8515727fd5b417be45ac93" exitCode=137 Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.148482 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d7859794-vrzpf" event={"ID":"8b25f631-1103-4844-89c0-02fb680c32d4","Type":"ContainerDied","Data":"f624f50fe15e5293fdfd37a83767d6b97602e59e4b8515727fd5b417be45ac93"} Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.149305 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d7859794-vrzpf" event={"ID":"8b25f631-1103-4844-89c0-02fb680c32d4","Type":"ContainerDied","Data":"fc793220a45f77025adbe4d4f711f609f22eba7206f7da27e468bbdba0047561"} Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.149328 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc793220a45f77025adbe4d4f711f609f22eba7206f7da27e468bbdba0047561" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.241823 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.336434 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-config-data\") pod \"8b25f631-1103-4844-89c0-02fb680c32d4\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.336561 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b25f631-1103-4844-89c0-02fb680c32d4-logs\") pod \"8b25f631-1103-4844-89c0-02fb680c32d4\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.336599 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-scripts\") pod \"8b25f631-1103-4844-89c0-02fb680c32d4\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.336640 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-tls-certs\") pod \"8b25f631-1103-4844-89c0-02fb680c32d4\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.336756 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-combined-ca-bundle\") pod \"8b25f631-1103-4844-89c0-02fb680c32d4\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.336792 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p6z2\" (UniqueName: \"kubernetes.io/projected/8b25f631-1103-4844-89c0-02fb680c32d4-kube-api-access-6p6z2\") pod \"8b25f631-1103-4844-89c0-02fb680c32d4\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.336889 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-secret-key\") pod \"8b25f631-1103-4844-89c0-02fb680c32d4\" (UID: \"8b25f631-1103-4844-89c0-02fb680c32d4\") " Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.337572 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b25f631-1103-4844-89c0-02fb680c32d4-logs" (OuterVolumeSpecName: "logs") pod "8b25f631-1103-4844-89c0-02fb680c32d4" (UID: "8b25f631-1103-4844-89c0-02fb680c32d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.343439 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b25f631-1103-4844-89c0-02fb680c32d4-kube-api-access-6p6z2" (OuterVolumeSpecName: "kube-api-access-6p6z2") pod "8b25f631-1103-4844-89c0-02fb680c32d4" (UID: "8b25f631-1103-4844-89c0-02fb680c32d4"). InnerVolumeSpecName "kube-api-access-6p6z2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.347399 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8b25f631-1103-4844-89c0-02fb680c32d4" (UID: "8b25f631-1103-4844-89c0-02fb680c32d4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.367128 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-config-data" (OuterVolumeSpecName: "config-data") pod "8b25f631-1103-4844-89c0-02fb680c32d4" (UID: "8b25f631-1103-4844-89c0-02fb680c32d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.373866 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-scripts" (OuterVolumeSpecName: "scripts") pod "8b25f631-1103-4844-89c0-02fb680c32d4" (UID: "8b25f631-1103-4844-89c0-02fb680c32d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.404747 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b25f631-1103-4844-89c0-02fb680c32d4" (UID: "8b25f631-1103-4844-89c0-02fb680c32d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.425528 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "8b25f631-1103-4844-89c0-02fb680c32d4" (UID: "8b25f631-1103-4844-89c0-02fb680c32d4"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.440322 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.440625 4988 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.440834 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.440988 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p6z2\" (UniqueName: \"kubernetes.io/projected/8b25f631-1103-4844-89c0-02fb680c32d4-kube-api-access-6p6z2\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.441091 4988 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b25f631-1103-4844-89c0-02fb680c32d4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.441186 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b25f631-1103-4844-89c0-02fb680c32d4-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:34 crc kubenswrapper[4988]: I1123 08:24:34.441293 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b25f631-1103-4844-89c0-02fb680c32d4-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:35 crc kubenswrapper[4988]: I1123 08:24:35.161184 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77d7859794-vrzpf" Nov 23 08:24:35 crc kubenswrapper[4988]: I1123 08:24:35.205387 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77d7859794-vrzpf"] Nov 23 08:24:35 crc kubenswrapper[4988]: I1123 08:24:35.220180 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77d7859794-vrzpf"] Nov 23 08:24:36 crc kubenswrapper[4988]: I1123 08:24:36.516454 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" path="/var/lib/kubelet/pods/8b25f631-1103-4844-89c0-02fb680c32d4/volumes" Nov 23 08:24:40 crc kubenswrapper[4988]: I1123 08:24:40.067571 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-j8ww7"] Nov 23 08:24:40 crc kubenswrapper[4988]: I1123 08:24:40.079249 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-j8ww7"] Nov 23 08:24:40 crc kubenswrapper[4988]: I1123 08:24:40.513007 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec" path="/var/lib/kubelet/pods/c9d07ad8-a6d6-4130-bc1e-b5c845e7e9ec/volumes" Nov 23 08:24:43 crc kubenswrapper[4988]: I1123 08:24:43.496114 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:24:43 crc kubenswrapper[4988]: E1123 08:24:43.497657 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.816106 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-767f5c4c7b-vjzcc"] Nov 23 08:24:44 crc kubenswrapper[4988]: E1123 08:24:44.818131 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerName="registry-server" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.818288 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerName="registry-server" Nov 23 08:24:44 crc kubenswrapper[4988]: E1123 08:24:44.818384 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon-log" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.818457 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon-log" Nov 23 08:24:44 crc kubenswrapper[4988]: E1123 08:24:44.818542 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerName="extract-utilities" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.818744 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerName="extract-utilities" Nov 23 08:24:44 crc kubenswrapper[4988]: E1123 08:24:44.818845 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerName="extract-content" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.818923 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerName="extract-content" Nov 23 08:24:44 crc kubenswrapper[4988]: E1123 08:24:44.819017 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.819094 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.819480 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon-log" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.819621 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b25f631-1103-4844-89c0-02fb680c32d4" containerName="horizon" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.819714 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ab567d-8f2d-4b2e-8b4b-08cefc01d0ec" containerName="registry-server" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.821155 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.828362 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-767f5c4c7b-vjzcc"] Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.879102 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b2t29"] Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.881980 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.888924 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2t29"] Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965332 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ead6449e-2c88-477e-97fb-1f6ad9bcc287-scripts\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965374 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvgrs\" (UniqueName: \"kubernetes.io/projected/799b49ae-a68c-4ef5-8761-60b881158277-kube-api-access-dvgrs\") pod \"redhat-marketplace-b2t29\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965402 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-catalog-content\") pod \"redhat-marketplace-b2t29\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965460 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ead6449e-2c88-477e-97fb-1f6ad9bcc287-horizon-secret-key\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " 
pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965482 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead6449e-2c88-477e-97fb-1f6ad9bcc287-horizon-tls-certs\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965503 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead6449e-2c88-477e-97fb-1f6ad9bcc287-config-data\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965540 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ead6449e-2c88-477e-97fb-1f6ad9bcc287-logs\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965600 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79htv\" (UniqueName: \"kubernetes.io/projected/ead6449e-2c88-477e-97fb-1f6ad9bcc287-kube-api-access-79htv\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965621 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead6449e-2c88-477e-97fb-1f6ad9bcc287-combined-ca-bundle\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:44 crc kubenswrapper[4988]: I1123 08:24:44.965634 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-utilities\") pod \"redhat-marketplace-b2t29\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067255 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead6449e-2c88-477e-97fb-1f6ad9bcc287-horizon-tls-certs\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067300 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead6449e-2c88-477e-97fb-1f6ad9bcc287-config-data\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067366 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ead6449e-2c88-477e-97fb-1f6ad9bcc287-logs\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " 
pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067431 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79htv\" (UniqueName: \"kubernetes.io/projected/ead6449e-2c88-477e-97fb-1f6ad9bcc287-kube-api-access-79htv\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067455 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead6449e-2c88-477e-97fb-1f6ad9bcc287-combined-ca-bundle\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067472 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-utilities\") pod \"redhat-marketplace-b2t29\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067508 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ead6449e-2c88-477e-97fb-1f6ad9bcc287-scripts\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067524 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvgrs\" (UniqueName: \"kubernetes.io/projected/799b49ae-a68c-4ef5-8761-60b881158277-kube-api-access-dvgrs\") pod \"redhat-marketplace-b2t29\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067546 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-catalog-content\") pod \"redhat-marketplace-b2t29\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067596 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ead6449e-2c88-477e-97fb-1f6ad9bcc287-horizon-secret-key\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.067833 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ead6449e-2c88-477e-97fb-1f6ad9bcc287-logs\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.068003 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-catalog-content\") pod \"redhat-marketplace-b2t29\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 
08:24:45.068093 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-utilities\") pod \"redhat-marketplace-b2t29\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.068360 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ead6449e-2c88-477e-97fb-1f6ad9bcc287-scripts\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.069345 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead6449e-2c88-477e-97fb-1f6ad9bcc287-config-data\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.073160 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead6449e-2c88-477e-97fb-1f6ad9bcc287-combined-ca-bundle\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.074573 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ead6449e-2c88-477e-97fb-1f6ad9bcc287-horizon-secret-key\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.080822 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead6449e-2c88-477e-97fb-1f6ad9bcc287-horizon-tls-certs\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.084127 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79htv\" (UniqueName: \"kubernetes.io/projected/ead6449e-2c88-477e-97fb-1f6ad9bcc287-kube-api-access-79htv\") pod \"horizon-767f5c4c7b-vjzcc\" (UID: \"ead6449e-2c88-477e-97fb-1f6ad9bcc287\") " pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.086459 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvgrs\" (UniqueName: \"kubernetes.io/projected/799b49ae-a68c-4ef5-8761-60b881158277-kube-api-access-dvgrs\") pod \"redhat-marketplace-b2t29\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.145725 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.207682 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.652804 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-767f5c4c7b-vjzcc"] Nov 23 08:24:45 crc kubenswrapper[4988]: I1123 08:24:45.723004 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2t29"] Nov 23 08:24:45 crc kubenswrapper[4988]: W1123 08:24:45.729224 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799b49ae_a68c_4ef5_8761_60b881158277.slice/crio-fa431c4be783b586b0968dabdd599ba62012db6e7d25eb22771699ef6fa24f59 WatchSource:0}: Error finding container fa431c4be783b586b0968dabdd599ba62012db6e7d25eb22771699ef6fa24f59: Status 404 returned error can't find the container with id fa431c4be783b586b0968dabdd599ba62012db6e7d25eb22771699ef6fa24f59 Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.102552 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-xhcvh"] Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.105503 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.120690 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xhcvh"] Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.191841 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f42972-e4b0-4647-ad19-b0a55d64ba09-operator-scripts\") pod \"heat-db-create-xhcvh\" (UID: \"91f42972-e4b0-4647-ad19-b0a55d64ba09\") " pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.191925 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rx8c\" (UniqueName: \"kubernetes.io/projected/91f42972-e4b0-4647-ad19-b0a55d64ba09-kube-api-access-2rx8c\") pod \"heat-db-create-xhcvh\" (UID: \"91f42972-e4b0-4647-ad19-b0a55d64ba09\") " pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.210111 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-e035-account-create-rfmkd"] Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.211495 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.213538 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.218429 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-e035-account-create-rfmkd"] Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.293736 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rx8c\" (UniqueName: \"kubernetes.io/projected/91f42972-e4b0-4647-ad19-b0a55d64ba09-kube-api-access-2rx8c\") pod \"heat-db-create-xhcvh\" (UID: \"91f42972-e4b0-4647-ad19-b0a55d64ba09\") " pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.293843 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864ed855-8f69-4030-bf86-653c71905588-operator-scripts\") pod \"heat-e035-account-create-rfmkd\" (UID: \"864ed855-8f69-4030-bf86-653c71905588\") " pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.293976 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f42972-e4b0-4647-ad19-b0a55d64ba09-operator-scripts\") pod \"heat-db-create-xhcvh\" (UID: \"91f42972-e4b0-4647-ad19-b0a55d64ba09\") " pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.294023 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pt6c\" (UniqueName: \"kubernetes.io/projected/864ed855-8f69-4030-bf86-653c71905588-kube-api-access-9pt6c\") pod \"heat-e035-account-create-rfmkd\" (UID: \"864ed855-8f69-4030-bf86-653c71905588\") " pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.294921 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f42972-e4b0-4647-ad19-b0a55d64ba09-operator-scripts\") pod \"heat-db-create-xhcvh\" (UID: \"91f42972-e4b0-4647-ad19-b0a55d64ba09\") " pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.299211 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-767f5c4c7b-vjzcc" event={"ID":"ead6449e-2c88-477e-97fb-1f6ad9bcc287","Type":"ContainerStarted","Data":"de098a6ff4d9a32ee6bcd78fc0139048ef32fb4b0405e7068fd4bad8166071d1"} Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.299334 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-767f5c4c7b-vjzcc" event={"ID":"ead6449e-2c88-477e-97fb-1f6ad9bcc287","Type":"ContainerStarted","Data":"19df67d58462db6ff87ca40815ed17a891de996f6031aa7600ddc97d448addbc"} Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.299351 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-767f5c4c7b-vjzcc" event={"ID":"ead6449e-2c88-477e-97fb-1f6ad9bcc287","Type":"ContainerStarted","Data":"d9f78764132d46e4835c75997917b4f4ab52f0a065b27295a2d0498ffb7ced81"} Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.301497 4988 generic.go:334] "Generic (PLEG): container finished" podID="799b49ae-a68c-4ef5-8761-60b881158277" containerID="2c5b9b49e719948754d8e1131804db2b0fbb31c526383b7930aae432a954e784" 
exitCode=0 Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.301538 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2t29" event={"ID":"799b49ae-a68c-4ef5-8761-60b881158277","Type":"ContainerDied","Data":"2c5b9b49e719948754d8e1131804db2b0fbb31c526383b7930aae432a954e784"} Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.301565 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2t29" event={"ID":"799b49ae-a68c-4ef5-8761-60b881158277","Type":"ContainerStarted","Data":"fa431c4be783b586b0968dabdd599ba62012db6e7d25eb22771699ef6fa24f59"} Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.324791 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-767f5c4c7b-vjzcc" podStartSLOduration=2.3247711620000002 podStartE2EDuration="2.324771162s" podCreationTimestamp="2025-11-23 08:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:24:46.318029198 +0000 UTC m=+5938.626541971" watchObservedRunningTime="2025-11-23 08:24:46.324771162 +0000 UTC m=+5938.633283925" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.331541 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rx8c\" (UniqueName: \"kubernetes.io/projected/91f42972-e4b0-4647-ad19-b0a55d64ba09-kube-api-access-2rx8c\") pod \"heat-db-create-xhcvh\" (UID: \"91f42972-e4b0-4647-ad19-b0a55d64ba09\") " pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.395614 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pt6c\" (UniqueName: \"kubernetes.io/projected/864ed855-8f69-4030-bf86-653c71905588-kube-api-access-9pt6c\") pod \"heat-e035-account-create-rfmkd\" (UID: \"864ed855-8f69-4030-bf86-653c71905588\") " pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.395773 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864ed855-8f69-4030-bf86-653c71905588-operator-scripts\") pod \"heat-e035-account-create-rfmkd\" (UID: \"864ed855-8f69-4030-bf86-653c71905588\") " pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.396515 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864ed855-8f69-4030-bf86-653c71905588-operator-scripts\") pod \"heat-e035-account-create-rfmkd\" (UID: \"864ed855-8f69-4030-bf86-653c71905588\") " pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.412373 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pt6c\" (UniqueName: \"kubernetes.io/projected/864ed855-8f69-4030-bf86-653c71905588-kube-api-access-9pt6c\") pod \"heat-e035-account-create-rfmkd\" (UID: \"864ed855-8f69-4030-bf86-653c71905588\") " pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.477270 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.537726 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:46 crc kubenswrapper[4988]: W1123 08:24:46.949105 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91f42972_e4b0_4647_ad19_b0a55d64ba09.slice/crio-0acb6a4b0dbb0e89f05c19b6e68f945c29fb473c6b93f5e569ebc9080957a154 WatchSource:0}: Error finding container 0acb6a4b0dbb0e89f05c19b6e68f945c29fb473c6b93f5e569ebc9080957a154: Status 404 returned error can't find the container with id 0acb6a4b0dbb0e89f05c19b6e68f945c29fb473c6b93f5e569ebc9080957a154 Nov 23 08:24:46 crc kubenswrapper[4988]: I1123 08:24:46.949992 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xhcvh"] Nov 23 08:24:47 crc kubenswrapper[4988]: I1123 08:24:47.036171 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-e035-account-create-rfmkd"] Nov 23 08:24:47 crc kubenswrapper[4988]: W1123 08:24:47.054128 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod864ed855_8f69_4030_bf86_653c71905588.slice/crio-16e2bd2bcfb15701d525771eebcc2a0df888990fd7ccade53521014f92134b95 WatchSource:0}: Error finding container 16e2bd2bcfb15701d525771eebcc2a0df888990fd7ccade53521014f92134b95: Status 404 returned error can't find the container with id 16e2bd2bcfb15701d525771eebcc2a0df888990fd7ccade53521014f92134b95 Nov 23 08:24:47 crc kubenswrapper[4988]: I1123 08:24:47.310343 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e035-account-create-rfmkd" event={"ID":"864ed855-8f69-4030-bf86-653c71905588","Type":"ContainerStarted","Data":"7a53d4b9795e4ee665ab772395df9292ec2b8b9d69e0a021bf847dad50ecd747"} Nov 23 08:24:47 crc kubenswrapper[4988]: I1123 08:24:47.310403 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e035-account-create-rfmkd" event={"ID":"864ed855-8f69-4030-bf86-653c71905588","Type":"ContainerStarted","Data":"16e2bd2bcfb15701d525771eebcc2a0df888990fd7ccade53521014f92134b95"} Nov 23 08:24:47 crc kubenswrapper[4988]: I1123 08:24:47.311891 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2t29" event={"ID":"799b49ae-a68c-4ef5-8761-60b881158277","Type":"ContainerStarted","Data":"384fd3712d5906c400d21fb6da296d343692bb4601908070db5d0b119fe5461f"} Nov 23 08:24:47 crc kubenswrapper[4988]: I1123 08:24:47.316085 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xhcvh" event={"ID":"91f42972-e4b0-4647-ad19-b0a55d64ba09","Type":"ContainerStarted","Data":"b5050ceca04f5f7fef65fa9b96b9271d27de1e7c5dbd13f76d0bb41f87006f06"} Nov 23 08:24:47 crc kubenswrapper[4988]: I1123 08:24:47.316118 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xhcvh" event={"ID":"91f42972-e4b0-4647-ad19-b0a55d64ba09","Type":"ContainerStarted","Data":"0acb6a4b0dbb0e89f05c19b6e68f945c29fb473c6b93f5e569ebc9080957a154"} Nov 23 08:24:47 crc kubenswrapper[4988]: I1123 08:24:47.331796 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-e035-account-create-rfmkd" podStartSLOduration=1.33177803 podStartE2EDuration="1.33177803s" podCreationTimestamp="2025-11-23 08:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:24:47.321838508 +0000 UTC m=+5939.630351291" 
watchObservedRunningTime="2025-11-23 08:24:47.33177803 +0000 UTC m=+5939.640290793" Nov 23 08:24:47 crc kubenswrapper[4988]: I1123 08:24:47.371581 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-xhcvh" podStartSLOduration=1.371554411 podStartE2EDuration="1.371554411s" podCreationTimestamp="2025-11-23 08:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:24:47.359434765 +0000 UTC m=+5939.667947528" watchObservedRunningTime="2025-11-23 08:24:47.371554411 +0000 UTC m=+5939.680067184" Nov 23 08:24:48 crc kubenswrapper[4988]: I1123 08:24:48.327393 4988 generic.go:334] "Generic (PLEG): container finished" podID="799b49ae-a68c-4ef5-8761-60b881158277" containerID="384fd3712d5906c400d21fb6da296d343692bb4601908070db5d0b119fe5461f" exitCode=0 Nov 23 08:24:48 crc kubenswrapper[4988]: I1123 08:24:48.327443 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2t29" event={"ID":"799b49ae-a68c-4ef5-8761-60b881158277","Type":"ContainerDied","Data":"384fd3712d5906c400d21fb6da296d343692bb4601908070db5d0b119fe5461f"} Nov 23 08:24:48 crc kubenswrapper[4988]: I1123 08:24:48.330023 4988 generic.go:334] "Generic (PLEG): container finished" podID="91f42972-e4b0-4647-ad19-b0a55d64ba09" containerID="b5050ceca04f5f7fef65fa9b96b9271d27de1e7c5dbd13f76d0bb41f87006f06" exitCode=0 Nov 23 08:24:48 crc kubenswrapper[4988]: I1123 08:24:48.330089 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xhcvh" event={"ID":"91f42972-e4b0-4647-ad19-b0a55d64ba09","Type":"ContainerDied","Data":"b5050ceca04f5f7fef65fa9b96b9271d27de1e7c5dbd13f76d0bb41f87006f06"} Nov 23 08:24:48 crc kubenswrapper[4988]: I1123 08:24:48.335181 4988 generic.go:334] "Generic (PLEG): container finished" podID="864ed855-8f69-4030-bf86-653c71905588" containerID="7a53d4b9795e4ee665ab772395df9292ec2b8b9d69e0a021bf847dad50ecd747" exitCode=0 Nov 23 08:24:48 crc kubenswrapper[4988]: I1123 08:24:48.335247 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e035-account-create-rfmkd" event={"ID":"864ed855-8f69-4030-bf86-653c71905588","Type":"ContainerDied","Data":"7a53d4b9795e4ee665ab772395df9292ec2b8b9d69e0a021bf847dad50ecd747"} Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.349581 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2t29" event={"ID":"799b49ae-a68c-4ef5-8761-60b881158277","Type":"ContainerStarted","Data":"d9919dda1196ba26ee07eeedfda35924f3b4db516372f595968c631c2d90f9e6"} Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.841669 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.848771 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.859468 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b2t29" podStartSLOduration=3.4122437740000002 podStartE2EDuration="5.859443911s" podCreationTimestamp="2025-11-23 08:24:44 +0000 UTC" firstStartedPulling="2025-11-23 08:24:46.302807356 +0000 UTC m=+5938.611320119" lastFinishedPulling="2025-11-23 08:24:48.750007493 +0000 UTC m=+5941.058520256" observedRunningTime="2025-11-23 08:24:49.372517241 +0000 UTC m=+5941.681030014" watchObservedRunningTime="2025-11-23 08:24:49.859443911 +0000 UTC m=+5942.167956674" Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.986499 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rx8c\" (UniqueName: \"kubernetes.io/projected/91f42972-e4b0-4647-ad19-b0a55d64ba09-kube-api-access-2rx8c\") pod \"91f42972-e4b0-4647-ad19-b0a55d64ba09\" (UID: \"91f42972-e4b0-4647-ad19-b0a55d64ba09\") " Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.987065 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864ed855-8f69-4030-bf86-653c71905588-operator-scripts\") pod \"864ed855-8f69-4030-bf86-653c71905588\" (UID: \"864ed855-8f69-4030-bf86-653c71905588\") " Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.987154 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f42972-e4b0-4647-ad19-b0a55d64ba09-operator-scripts\") pod \"91f42972-e4b0-4647-ad19-b0a55d64ba09\" (UID: \"91f42972-e4b0-4647-ad19-b0a55d64ba09\") " Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.987282 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pt6c\" (UniqueName: \"kubernetes.io/projected/864ed855-8f69-4030-bf86-653c71905588-kube-api-access-9pt6c\") pod \"864ed855-8f69-4030-bf86-653c71905588\" (UID: \"864ed855-8f69-4030-bf86-653c71905588\") " Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.988922 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/864ed855-8f69-4030-bf86-653c71905588-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "864ed855-8f69-4030-bf86-653c71905588" (UID: "864ed855-8f69-4030-bf86-653c71905588"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.989064 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f42972-e4b0-4647-ad19-b0a55d64ba09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91f42972-e4b0-4647-ad19-b0a55d64ba09" (UID: "91f42972-e4b0-4647-ad19-b0a55d64ba09"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.989848 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f42972-e4b0-4647-ad19-b0a55d64ba09-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:49 crc kubenswrapper[4988]: I1123 08:24:49.989893 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/864ed855-8f69-4030-bf86-653c71905588-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.001137 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f42972-e4b0-4647-ad19-b0a55d64ba09-kube-api-access-2rx8c" (OuterVolumeSpecName: "kube-api-access-2rx8c") pod "91f42972-e4b0-4647-ad19-b0a55d64ba09" (UID: "91f42972-e4b0-4647-ad19-b0a55d64ba09"). InnerVolumeSpecName "kube-api-access-2rx8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.001441 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864ed855-8f69-4030-bf86-653c71905588-kube-api-access-9pt6c" (OuterVolumeSpecName: "kube-api-access-9pt6c") pod "864ed855-8f69-4030-bf86-653c71905588" (UID: "864ed855-8f69-4030-bf86-653c71905588"). InnerVolumeSpecName "kube-api-access-9pt6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.091258 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rx8c\" (UniqueName: \"kubernetes.io/projected/91f42972-e4b0-4647-ad19-b0a55d64ba09-kube-api-access-2rx8c\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.091313 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pt6c\" (UniqueName: \"kubernetes.io/projected/864ed855-8f69-4030-bf86-653c71905588-kube-api-access-9pt6c\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.374822 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e035-account-create-rfmkd" event={"ID":"864ed855-8f69-4030-bf86-653c71905588","Type":"ContainerDied","Data":"16e2bd2bcfb15701d525771eebcc2a0df888990fd7ccade53521014f92134b95"} Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.374880 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16e2bd2bcfb15701d525771eebcc2a0df888990fd7ccade53521014f92134b95" Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.374878 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-e035-account-create-rfmkd" Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.377007 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-xhcvh" Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.377308 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xhcvh" event={"ID":"91f42972-e4b0-4647-ad19-b0a55d64ba09","Type":"ContainerDied","Data":"0acb6a4b0dbb0e89f05c19b6e68f945c29fb473c6b93f5e569ebc9080957a154"} Nov 23 08:24:50 crc kubenswrapper[4988]: I1123 08:24:50.377400 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0acb6a4b0dbb0e89f05c19b6e68f945c29fb473c6b93f5e569ebc9080957a154" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.627370 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-qqk7n"] Nov 23 08:24:51 crc kubenswrapper[4988]: E1123 08:24:51.628495 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864ed855-8f69-4030-bf86-653c71905588" containerName="mariadb-account-create" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.628523 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="864ed855-8f69-4030-bf86-653c71905588" containerName="mariadb-account-create" Nov 23 08:24:51 crc kubenswrapper[4988]: E1123 08:24:51.628567 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f42972-e4b0-4647-ad19-b0a55d64ba09" containerName="mariadb-database-create" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.628582 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f42972-e4b0-4647-ad19-b0a55d64ba09" containerName="mariadb-database-create" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.628986 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="864ed855-8f69-4030-bf86-653c71905588" containerName="mariadb-account-create" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.629012 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f42972-e4b0-4647-ad19-b0a55d64ba09" containerName="mariadb-database-create" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.630128 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.632905 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4qqz9" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.633175 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.660317 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qqk7n"] Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.721040 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-config-data\") pod \"heat-db-sync-qqk7n\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.721174 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8kdb\" (UniqueName: \"kubernetes.io/projected/939696e4-9c22-448f-8269-8d57c545640e-kube-api-access-z8kdb\") pod \"heat-db-sync-qqk7n\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.721323 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-combined-ca-bundle\") pod \"heat-db-sync-qqk7n\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.823001 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8kdb\" (UniqueName: \"kubernetes.io/projected/939696e4-9c22-448f-8269-8d57c545640e-kube-api-access-z8kdb\") pod \"heat-db-sync-qqk7n\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.823093 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-combined-ca-bundle\") pod \"heat-db-sync-qqk7n\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.823172 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-config-data\") pod \"heat-db-sync-qqk7n\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.828705 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-config-data\") pod \"heat-db-sync-qqk7n\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.828918 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-combined-ca-bundle\") pod \"heat-db-sync-qqk7n\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " pod="openstack/heat-db-sync-qqk7n" 
Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.846900 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8kdb\" (UniqueName: \"kubernetes.io/projected/939696e4-9c22-448f-8269-8d57c545640e-kube-api-access-z8kdb\") pod \"heat-db-sync-qqk7n\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:51 crc kubenswrapper[4988]: I1123 08:24:51.955439 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qqk7n" Nov 23 08:24:52 crc kubenswrapper[4988]: I1123 08:24:52.454333 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qqk7n"] Nov 23 08:24:52 crc kubenswrapper[4988]: W1123 08:24:52.457080 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod939696e4_9c22_448f_8269_8d57c545640e.slice/crio-dfb6101187508bd5f65c603fb45e5f41c05b5d47c434dd6835b078ac112fb0c4 WatchSource:0}: Error finding container dfb6101187508bd5f65c603fb45e5f41c05b5d47c434dd6835b078ac112fb0c4: Status 404 returned error can't find the container with id dfb6101187508bd5f65c603fb45e5f41c05b5d47c434dd6835b078ac112fb0c4 Nov 23 08:24:53 crc kubenswrapper[4988]: I1123 08:24:53.415081 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qqk7n" event={"ID":"939696e4-9c22-448f-8269-8d57c545640e","Type":"ContainerStarted","Data":"dfb6101187508bd5f65c603fb45e5f41c05b5d47c434dd6835b078ac112fb0c4"} Nov 23 08:24:55 crc kubenswrapper[4988]: I1123 08:24:55.146581 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:55 crc kubenswrapper[4988]: I1123 08:24:55.146845 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:24:55 crc kubenswrapper[4988]: I1123 08:24:55.207833 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:55 crc kubenswrapper[4988]: I1123 08:24:55.207885 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:55 crc kubenswrapper[4988]: I1123 08:24:55.275929 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:55 crc kubenswrapper[4988]: I1123 08:24:55.475078 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:24:55 crc kubenswrapper[4988]: I1123 08:24:55.530591 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2t29"] Nov 23 08:24:56 crc kubenswrapper[4988]: I1123 08:24:56.496656 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:24:56 crc kubenswrapper[4988]: E1123 08:24:56.496885 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:24:57 crc kubenswrapper[4988]: I1123 
08:24:57.461713 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b2t29" podUID="799b49ae-a68c-4ef5-8761-60b881158277" containerName="registry-server" containerID="cri-o://d9919dda1196ba26ee07eeedfda35924f3b4db516372f595968c631c2d90f9e6" gracePeriod=2 Nov 23 08:24:58 crc kubenswrapper[4988]: I1123 08:24:58.475086 4988 generic.go:334] "Generic (PLEG): container finished" podID="799b49ae-a68c-4ef5-8761-60b881158277" containerID="d9919dda1196ba26ee07eeedfda35924f3b4db516372f595968c631c2d90f9e6" exitCode=0 Nov 23 08:24:58 crc kubenswrapper[4988]: I1123 08:24:58.475135 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2t29" event={"ID":"799b49ae-a68c-4ef5-8761-60b881158277","Type":"ContainerDied","Data":"d9919dda1196ba26ee07eeedfda35924f3b4db516372f595968c631c2d90f9e6"} Nov 23 08:24:59 crc kubenswrapper[4988]: I1123 08:24:59.003614 4988 scope.go:117] "RemoveContainer" containerID="f09db2d873d3009e0aaa3e744ba0a402cd8d5356e08c3f169765db376b60646d" Nov 23 08:24:59 crc kubenswrapper[4988]: I1123 08:24:59.757535 4988 scope.go:117] "RemoveContainer" containerID="91588a3597f6fb9f0f977e21834218a47a86aac3060d366815a90962157174d7" Nov 23 08:24:59 crc kubenswrapper[4988]: I1123 08:24:59.817086 4988 scope.go:117] "RemoveContainer" containerID="bc5fe75236f0abe1781cfe573d40b440305386e23c2de47904233c346adb618a" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.016413 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.106866 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-utilities\") pod \"799b49ae-a68c-4ef5-8761-60b881158277\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.107268 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvgrs\" (UniqueName: \"kubernetes.io/projected/799b49ae-a68c-4ef5-8761-60b881158277-kube-api-access-dvgrs\") pod \"799b49ae-a68c-4ef5-8761-60b881158277\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.107309 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-catalog-content\") pod \"799b49ae-a68c-4ef5-8761-60b881158277\" (UID: \"799b49ae-a68c-4ef5-8761-60b881158277\") " Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.107597 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-utilities" (OuterVolumeSpecName: "utilities") pod "799b49ae-a68c-4ef5-8761-60b881158277" (UID: "799b49ae-a68c-4ef5-8761-60b881158277"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.108239 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.112511 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799b49ae-a68c-4ef5-8761-60b881158277-kube-api-access-dvgrs" (OuterVolumeSpecName: "kube-api-access-dvgrs") pod "799b49ae-a68c-4ef5-8761-60b881158277" (UID: "799b49ae-a68c-4ef5-8761-60b881158277"). InnerVolumeSpecName "kube-api-access-dvgrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.130611 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "799b49ae-a68c-4ef5-8761-60b881158277" (UID: "799b49ae-a68c-4ef5-8761-60b881158277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.209673 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvgrs\" (UniqueName: \"kubernetes.io/projected/799b49ae-a68c-4ef5-8761-60b881158277-kube-api-access-dvgrs\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.209711 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/799b49ae-a68c-4ef5-8761-60b881158277-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.502791 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2t29" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.522525 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2t29" event={"ID":"799b49ae-a68c-4ef5-8761-60b881158277","Type":"ContainerDied","Data":"fa431c4be783b586b0968dabdd599ba62012db6e7d25eb22771699ef6fa24f59"} Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.522867 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qqk7n" event={"ID":"939696e4-9c22-448f-8269-8d57c545640e","Type":"ContainerStarted","Data":"354d2c5c293b36ce7eb64e72f7c03d657f326243bb57ea86ad54c53973b64c7b"} Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.523057 4988 scope.go:117] "RemoveContainer" containerID="d9919dda1196ba26ee07eeedfda35924f3b4db516372f595968c631c2d90f9e6" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.536908 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-qqk7n" podStartSLOduration=2.179094886 podStartE2EDuration="9.536833782s" podCreationTimestamp="2025-11-23 08:24:51 +0000 UTC" firstStartedPulling="2025-11-23 08:24:52.461122208 +0000 UTC m=+5944.769634981" lastFinishedPulling="2025-11-23 08:24:59.818861124 +0000 UTC m=+5952.127373877" observedRunningTime="2025-11-23 08:25:00.53265161 +0000 UTC m=+5952.841164383" watchObservedRunningTime="2025-11-23 08:25:00.536833782 +0000 UTC m=+5952.845346585" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.572010 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2t29"] Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.581726 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2t29"] Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.595717 4988 scope.go:117] "RemoveContainer" containerID="384fd3712d5906c400d21fb6da296d343692bb4601908070db5d0b119fe5461f" Nov 23 08:25:00 crc kubenswrapper[4988]: I1123 08:25:00.619234 4988 scope.go:117] "RemoveContainer" containerID="2c5b9b49e719948754d8e1131804db2b0fbb31c526383b7930aae432a954e784" Nov 23 08:25:02 crc kubenswrapper[4988]: I1123 08:25:02.514379 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="799b49ae-a68c-4ef5-8761-60b881158277" path="/var/lib/kubelet/pods/799b49ae-a68c-4ef5-8761-60b881158277/volumes" Nov 23 08:25:02 crc kubenswrapper[4988]: I1123 08:25:02.543371 4988 generic.go:334] "Generic (PLEG): container finished" podID="939696e4-9c22-448f-8269-8d57c545640e" containerID="354d2c5c293b36ce7eb64e72f7c03d657f326243bb57ea86ad54c53973b64c7b" exitCode=0 Nov 23 08:25:02 crc kubenswrapper[4988]: I1123 08:25:02.543415 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qqk7n" event={"ID":"939696e4-9c22-448f-8269-8d57c545640e","Type":"ContainerDied","Data":"354d2c5c293b36ce7eb64e72f7c03d657f326243bb57ea86ad54c53973b64c7b"} Nov 23 08:25:03 crc kubenswrapper[4988]: I1123 08:25:03.985168 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-qqk7n" Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.095744 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8kdb\" (UniqueName: \"kubernetes.io/projected/939696e4-9c22-448f-8269-8d57c545640e-kube-api-access-z8kdb\") pod \"939696e4-9c22-448f-8269-8d57c545640e\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.095778 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-combined-ca-bundle\") pod \"939696e4-9c22-448f-8269-8d57c545640e\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.095861 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-config-data\") pod \"939696e4-9c22-448f-8269-8d57c545640e\" (UID: \"939696e4-9c22-448f-8269-8d57c545640e\") " Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.111532 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939696e4-9c22-448f-8269-8d57c545640e-kube-api-access-z8kdb" (OuterVolumeSpecName: "kube-api-access-z8kdb") pod "939696e4-9c22-448f-8269-8d57c545640e" (UID: "939696e4-9c22-448f-8269-8d57c545640e"). InnerVolumeSpecName "kube-api-access-z8kdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.124319 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "939696e4-9c22-448f-8269-8d57c545640e" (UID: "939696e4-9c22-448f-8269-8d57c545640e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.187001 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-config-data" (OuterVolumeSpecName: "config-data") pod "939696e4-9c22-448f-8269-8d57c545640e" (UID: "939696e4-9c22-448f-8269-8d57c545640e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.198250 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8kdb\" (UniqueName: \"kubernetes.io/projected/939696e4-9c22-448f-8269-8d57c545640e-kube-api-access-z8kdb\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.198412 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.198479 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939696e4-9c22-448f-8269-8d57c545640e-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.578305 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qqk7n" event={"ID":"939696e4-9c22-448f-8269-8d57c545640e","Type":"ContainerDied","Data":"dfb6101187508bd5f65c603fb45e5f41c05b5d47c434dd6835b078ac112fb0c4"} Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.578360 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb6101187508bd5f65c603fb45e5f41c05b5d47c434dd6835b078ac112fb0c4" Nov 23 08:25:04 crc kubenswrapper[4988]: I1123 08:25:04.578431 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qqk7n" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.681832 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-9dc8c4bfb-l8fb6"] Nov 23 08:25:05 crc kubenswrapper[4988]: E1123 08:25:05.682417 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799b49ae-a68c-4ef5-8761-60b881158277" containerName="extract-utilities" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.682429 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="799b49ae-a68c-4ef5-8761-60b881158277" containerName="extract-utilities" Nov 23 08:25:05 crc kubenswrapper[4988]: E1123 08:25:05.682454 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799b49ae-a68c-4ef5-8761-60b881158277" containerName="registry-server" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.682461 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="799b49ae-a68c-4ef5-8761-60b881158277" containerName="registry-server" Nov 23 08:25:05 crc kubenswrapper[4988]: E1123 08:25:05.682479 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799b49ae-a68c-4ef5-8761-60b881158277" containerName="extract-content" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.682485 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="799b49ae-a68c-4ef5-8761-60b881158277" containerName="extract-content" Nov 23 08:25:05 crc kubenswrapper[4988]: E1123 08:25:05.682499 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939696e4-9c22-448f-8269-8d57c545640e" containerName="heat-db-sync" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.682505 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="939696e4-9c22-448f-8269-8d57c545640e" containerName="heat-db-sync" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.682660 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="799b49ae-a68c-4ef5-8761-60b881158277" containerName="registry-server" Nov 23 08:25:05 crc 
kubenswrapper[4988]: I1123 08:25:05.682675 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="939696e4-9c22-448f-8269-8d57c545640e" containerName="heat-db-sync" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.683587 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.688005 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.688621 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.693347 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4qqz9" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.703480 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9dc8c4bfb-l8fb6"] Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.830388 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-combined-ca-bundle\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.830518 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.830558 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data-custom\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.830582 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572sh\" (UniqueName: \"kubernetes.io/projected/6c05dea4-b5f4-4951-8c2f-106c877369ea-kube-api-access-572sh\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.871179 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7cc8c7674f-2pf2g"] Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.872539 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.874628 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.880675 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6d85b8db74-6lnzm"] Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.882016 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.887838 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.892381 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7cc8c7674f-2pf2g"] Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.902051 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d85b8db74-6lnzm"] Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.932267 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572sh\" (UniqueName: \"kubernetes.io/projected/6c05dea4-b5f4-4951-8c2f-106c877369ea-kube-api-access-572sh\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.932360 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-combined-ca-bundle\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.932473 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.932508 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data-custom\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.937678 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data-custom\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.940045 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.944929 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-combined-ca-bundle\") pod \"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:05 crc kubenswrapper[4988]: I1123 08:25:05.965240 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572sh\" (UniqueName: \"kubernetes.io/projected/6c05dea4-b5f4-4951-8c2f-106c877369ea-kube-api-access-572sh\") pod 
\"heat-engine-9dc8c4bfb-l8fb6\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.028339 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.034744 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-combined-ca-bundle\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.034913 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxzr\" (UniqueName: \"kubernetes.io/projected/10f03070-433b-4e4a-ba16-014a6c6a3094-kube-api-access-cvxzr\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.034964 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.035052 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data-custom\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.035103 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-combined-ca-bundle\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.035127 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data-custom\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.035276 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.035327 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr4lh\" (UniqueName: \"kubernetes.io/projected/8749f81d-53bd-4174-b21b-66ee8f64d330-kube-api-access-vr4lh\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " 
pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.139826 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-combined-ca-bundle\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.141267 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvxzr\" (UniqueName: \"kubernetes.io/projected/10f03070-433b-4e4a-ba16-014a6c6a3094-kube-api-access-cvxzr\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.141331 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.141469 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data-custom\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.141548 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-combined-ca-bundle\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.141579 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data-custom\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.141602 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.141645 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr4lh\" (UniqueName: \"kubernetes.io/projected/8749f81d-53bd-4174-b21b-66ee8f64d330-kube-api-access-vr4lh\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.150878 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data-custom\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 
23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.153569 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-combined-ca-bundle\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.157606 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.159482 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data-custom\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.165376 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-combined-ca-bundle\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.174046 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvxzr\" (UniqueName: \"kubernetes.io/projected/10f03070-433b-4e4a-ba16-014a6c6a3094-kube-api-access-cvxzr\") pod \"heat-api-6d85b8db74-6lnzm\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.174868 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.197074 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr4lh\" (UniqueName: \"kubernetes.io/projected/8749f81d-53bd-4174-b21b-66ee8f64d330-kube-api-access-vr4lh\") pod \"heat-cfnapi-7cc8c7674f-2pf2g\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.209055 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.502044 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.514758 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-9dc8c4bfb-l8fb6"] Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.608761 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" event={"ID":"6c05dea4-b5f4-4951-8c2f-106c877369ea","Type":"ContainerStarted","Data":"dd627ec39f6babb5e5ea57d3a1a66f07d1791c35d6e237df877e9d4935661dbf"} Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.708745 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d85b8db74-6lnzm"] Nov 23 08:25:06 crc kubenswrapper[4988]: I1123 08:25:06.949801 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7cc8c7674f-2pf2g"] Nov 23 08:25:06 crc kubenswrapper[4988]: W1123 08:25:06.963599 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8749f81d_53bd_4174_b21b_66ee8f64d330.slice/crio-c2149ac851acb36563cf058cdf46cadeef890aec1b761eaae551843c3da0c424 WatchSource:0}: Error finding container c2149ac851acb36563cf058cdf46cadeef890aec1b761eaae551843c3da0c424: Status 404 returned error can't find the container with id c2149ac851acb36563cf058cdf46cadeef890aec1b761eaae551843c3da0c424 Nov 23 08:25:07 crc kubenswrapper[4988]: I1123 08:25:07.255918 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:25:07 crc kubenswrapper[4988]: I1123 08:25:07.627073 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" event={"ID":"8749f81d-53bd-4174-b21b-66ee8f64d330","Type":"ContainerStarted","Data":"c2149ac851acb36563cf058cdf46cadeef890aec1b761eaae551843c3da0c424"} Nov 23 08:25:07 crc kubenswrapper[4988]: I1123 08:25:07.631563 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" event={"ID":"6c05dea4-b5f4-4951-8c2f-106c877369ea","Type":"ContainerStarted","Data":"27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a"} Nov 23 08:25:07 crc kubenswrapper[4988]: I1123 08:25:07.631620 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:07 crc kubenswrapper[4988]: I1123 08:25:07.635331 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d85b8db74-6lnzm" event={"ID":"10f03070-433b-4e4a-ba16-014a6c6a3094","Type":"ContainerStarted","Data":"9207efa9129117c095e30c44cd25f665a0d6a4a37d98bb58c5b6647165524c82"} Nov 23 08:25:07 crc kubenswrapper[4988]: I1123 08:25:07.656814 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" podStartSLOduration=2.656793287 podStartE2EDuration="2.656793287s" podCreationTimestamp="2025-11-23 08:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:25:07.651327043 +0000 UTC m=+5959.959839806" watchObservedRunningTime="2025-11-23 08:25:07.656793287 +0000 UTC m=+5959.965306050" Nov 23 08:25:08 crc kubenswrapper[4988]: I1123 08:25:08.883375 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-767f5c4c7b-vjzcc" Nov 23 08:25:08 crc kubenswrapper[4988]: I1123 08:25:08.935741 4988 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/horizon-79ccf4df9b-dnc54"] Nov 23 08:25:08 crc kubenswrapper[4988]: I1123 08:25:08.935985 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79ccf4df9b-dnc54" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon-log" containerID="cri-o://5eff7259b5e9b42064fc03cfc94c87f6a26c2210d84ca74820726d1fde757599" gracePeriod=30 Nov 23 08:25:08 crc kubenswrapper[4988]: I1123 08:25:08.936127 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-79ccf4df9b-dnc54" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon" containerID="cri-o://2bc9b1ba3ee656da3d3db7f4806251c70cfa46b31562602dfc5b28713c09068b" gracePeriod=30 Nov 23 08:25:09 crc kubenswrapper[4988]: I1123 08:25:09.653402 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d85b8db74-6lnzm" event={"ID":"10f03070-433b-4e4a-ba16-014a6c6a3094","Type":"ContainerStarted","Data":"724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d"} Nov 23 08:25:09 crc kubenswrapper[4988]: I1123 08:25:09.653805 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:09 crc kubenswrapper[4988]: I1123 08:25:09.655236 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" event={"ID":"8749f81d-53bd-4174-b21b-66ee8f64d330","Type":"ContainerStarted","Data":"7dc142a766d80f7632d597051ff2fe84b99c05a8e8a3f2c5ef9f3ef826ec060e"} Nov 23 08:25:09 crc kubenswrapper[4988]: I1123 08:25:09.656286 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:09 crc kubenswrapper[4988]: I1123 08:25:09.674590 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6d85b8db74-6lnzm" podStartSLOduration=2.665601132 podStartE2EDuration="4.674568097s" podCreationTimestamp="2025-11-23 08:25:05 +0000 UTC" firstStartedPulling="2025-11-23 08:25:06.721342693 +0000 UTC m=+5959.029855456" lastFinishedPulling="2025-11-23 08:25:08.730309658 +0000 UTC m=+5961.038822421" observedRunningTime="2025-11-23 08:25:09.665916426 +0000 UTC m=+5961.974429209" watchObservedRunningTime="2025-11-23 08:25:09.674568097 +0000 UTC m=+5961.983080860" Nov 23 08:25:09 crc kubenswrapper[4988]: I1123 08:25:09.690665 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" podStartSLOduration=2.92292838 podStartE2EDuration="4.690641649s" podCreationTimestamp="2025-11-23 08:25:05 +0000 UTC" firstStartedPulling="2025-11-23 08:25:06.965827538 +0000 UTC m=+5959.274340301" lastFinishedPulling="2025-11-23 08:25:08.733540807 +0000 UTC m=+5961.042053570" observedRunningTime="2025-11-23 08:25:09.686797975 +0000 UTC m=+5961.995310758" watchObservedRunningTime="2025-11-23 08:25:09.690641649 +0000 UTC m=+5961.999154412" Nov 23 08:25:11 crc kubenswrapper[4988]: I1123 08:25:11.496042 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:25:11 crc kubenswrapper[4988]: E1123 08:25:11.496525 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.111544 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79ccf4df9b-dnc54" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.95:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:53462->10.217.1.95:8443: read: connection reset by peer" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.505978 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-b646bc85-sjc6z"] Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.507074 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.513432 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-b646bc85-sjc6z"] Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.521278 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6bff56f848-tmpm9"] Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.522348 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.536713 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-848d9456b5-dbbfn"] Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.538001 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.549809 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-848d9456b5-dbbfn"] Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.581117 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bff56f848-tmpm9"] Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683495 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c678f9-ada0-449b-b53e-d5831743585c-combined-ca-bundle\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683563 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01c678f9-ada0-449b-b53e-d5831743585c-config-data-custom\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683599 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683640 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-combined-ca-bundle\") pod 
\"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683656 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683673 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c678f9-ada0-449b-b53e-d5831743585c-config-data\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683699 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-combined-ca-bundle\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683728 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zvfl\" (UniqueName: \"kubernetes.io/projected/b937708d-98b3-407e-9424-afa537bfa8d4-kube-api-access-5zvfl\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683791 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data-custom\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683830 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvg26\" (UniqueName: \"kubernetes.io/projected/94644089-ebbf-4e4e-aa96-de3526cdce8b-kube-api-access-tvg26\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683848 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data-custom\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.683876 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcv9k\" (UniqueName: \"kubernetes.io/projected/01c678f9-ada0-449b-b53e-d5831743585c-kube-api-access-hcv9k\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.687174 4988 generic.go:334] "Generic (PLEG): container finished" 
podID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerID="2bc9b1ba3ee656da3d3db7f4806251c70cfa46b31562602dfc5b28713c09068b" exitCode=0 Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.687221 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79ccf4df9b-dnc54" event={"ID":"fee15316-b0c1-4900-95fd-49110a4bab1a","Type":"ContainerDied","Data":"2bc9b1ba3ee656da3d3db7f4806251c70cfa46b31562602dfc5b28713c09068b"} Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.785918 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcv9k\" (UniqueName: \"kubernetes.io/projected/01c678f9-ada0-449b-b53e-d5831743585c-kube-api-access-hcv9k\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786012 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c678f9-ada0-449b-b53e-d5831743585c-combined-ca-bundle\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786089 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01c678f9-ada0-449b-b53e-d5831743585c-config-data-custom\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786136 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786237 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-combined-ca-bundle\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786274 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786318 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c678f9-ada0-449b-b53e-d5831743585c-config-data\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786376 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-combined-ca-bundle\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc 
kubenswrapper[4988]: I1123 08:25:12.786434 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zvfl\" (UniqueName: \"kubernetes.io/projected/b937708d-98b3-407e-9424-afa537bfa8d4-kube-api-access-5zvfl\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786544 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data-custom\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786628 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvg26\" (UniqueName: \"kubernetes.io/projected/94644089-ebbf-4e4e-aa96-de3526cdce8b-kube-api-access-tvg26\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.786665 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data-custom\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.792387 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/01c678f9-ada0-449b-b53e-d5831743585c-config-data-custom\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.793388 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01c678f9-ada0-449b-b53e-d5831743585c-config-data\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.793518 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-combined-ca-bundle\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.795101 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01c678f9-ada0-449b-b53e-d5831743585c-combined-ca-bundle\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.800780 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.801009 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data-custom\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.801912 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.802860 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-combined-ca-bundle\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.818850 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data-custom\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.824447 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcv9k\" (UniqueName: \"kubernetes.io/projected/01c678f9-ada0-449b-b53e-d5831743585c-kube-api-access-hcv9k\") pod \"heat-engine-b646bc85-sjc6z\" (UID: \"01c678f9-ada0-449b-b53e-d5831743585c\") " pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.826820 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvg26\" (UniqueName: \"kubernetes.io/projected/94644089-ebbf-4e4e-aa96-de3526cdce8b-kube-api-access-tvg26\") pod \"heat-cfnapi-6bff56f848-tmpm9\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.829356 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zvfl\" (UniqueName: \"kubernetes.io/projected/b937708d-98b3-407e-9424-afa537bfa8d4-kube-api-access-5zvfl\") pod \"heat-api-848d9456b5-dbbfn\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.844157 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:12 crc kubenswrapper[4988]: I1123 08:25:12.869427 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.122878 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.410721 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6bff56f848-tmpm9"] Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.461336 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-848d9456b5-dbbfn"] Nov 23 08:25:13 crc kubenswrapper[4988]: W1123 08:25:13.470234 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb937708d_98b3_407e_9424_afa537bfa8d4.slice/crio-599f7cf2f82dc800cd0e5040d042eb25437da2e22b5628633bd3fcb5d5207d21 WatchSource:0}: Error finding container 599f7cf2f82dc800cd0e5040d042eb25437da2e22b5628633bd3fcb5d5207d21: Status 404 returned error can't find the container with id 599f7cf2f82dc800cd0e5040d042eb25437da2e22b5628633bd3fcb5d5207d21 Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.602400 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-b646bc85-sjc6z"] Nov 23 08:25:13 crc kubenswrapper[4988]: W1123 08:25:13.606257 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c678f9_ada0_449b_b53e_d5831743585c.slice/crio-3c03be650e4ccac7ea054a7304557fae6a5f7968b9026464839826933ab1b20a WatchSource:0}: Error finding container 3c03be650e4ccac7ea054a7304557fae6a5f7968b9026464839826933ab1b20a: Status 404 returned error can't find the container with id 3c03be650e4ccac7ea054a7304557fae6a5f7968b9026464839826933ab1b20a Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.734713 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d85b8db74-6lnzm"] Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.735130 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6d85b8db74-6lnzm" podUID="10f03070-433b-4e4a-ba16-014a6c6a3094" containerName="heat-api" containerID="cri-o://724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d" gracePeriod=60 Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.753127 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-b646bc85-sjc6z" event={"ID":"01c678f9-ada0-449b-b53e-d5831743585c","Type":"ContainerStarted","Data":"3c03be650e4ccac7ea054a7304557fae6a5f7968b9026464839826933ab1b20a"} Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.768156 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" event={"ID":"94644089-ebbf-4e4e-aa96-de3526cdce8b","Type":"ContainerStarted","Data":"8693ec28642defdecc4b6596926d6995b00c0a5ab2422bd392de26ab375a9ed9"} Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.769381 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.776910 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7cc8c7674f-2pf2g"] Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.777150 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" podUID="8749f81d-53bd-4174-b21b-66ee8f64d330" containerName="heat-cfnapi" containerID="cri-o://7dc142a766d80f7632d597051ff2fe84b99c05a8e8a3f2c5ef9f3ef826ec060e" gracePeriod=60 Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.791718 4988 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-568f945488-bz9kx"] Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.793641 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-848d9456b5-dbbfn" event={"ID":"b937708d-98b3-407e-9424-afa537bfa8d4","Type":"ContainerStarted","Data":"c18235c15f3da66274d6541fe0bab60a8c5250b890f42d238036fa04b4b9fae6"} Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.793840 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-848d9456b5-dbbfn" event={"ID":"b937708d-98b3-407e-9424-afa537bfa8d4","Type":"ContainerStarted","Data":"599f7cf2f82dc800cd0e5040d042eb25437da2e22b5628633bd3fcb5d5207d21"} Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.794025 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.794270 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.798443 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.798930 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.806524 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-568f945488-bz9kx"] Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.838007 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" podStartSLOduration=1.837976837 podStartE2EDuration="1.837976837s" podCreationTimestamp="2025-11-23 08:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:25:13.804583862 +0000 UTC m=+5966.113096645" watchObservedRunningTime="2025-11-23 08:25:13.837976837 +0000 UTC m=+5966.146489600" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.872827 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-79f6fbff8d-p2hvc"] Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.878874 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.879331 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-79f6fbff8d-p2hvc"] Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.887688 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.887857 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.922684 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-848d9456b5-dbbfn" podStartSLOduration=1.922669623 podStartE2EDuration="1.922669623s" podCreationTimestamp="2025-11-23 08:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:25:13.907091663 +0000 UTC m=+5966.215604426" watchObservedRunningTime="2025-11-23 08:25:13.922669623 +0000 UTC m=+5966.231182386" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.925869 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-config-data\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.925944 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-public-tls-certs\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.926005 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-config-data-custom\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.926040 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-internal-tls-certs\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.926096 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-combined-ca-bundle\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:13 crc kubenswrapper[4988]: I1123 08:25:13.926157 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghvd7\" (UniqueName: \"kubernetes.io/projected/24372f9b-b303-4136-bbe5-30ffd8b21823-kube-api-access-ghvd7\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " 
pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.027906 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-config-data-custom\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028176 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-config-data\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028211 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-public-tls-certs\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028246 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-public-tls-certs\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028267 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-internal-tls-certs\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028301 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-config-data-custom\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028326 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-internal-tls-certs\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028358 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-combined-ca-bundle\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028381 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-combined-ca-bundle\") pod \"heat-api-568f945488-bz9kx\" (UID: 
\"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028409 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghvd7\" (UniqueName: \"kubernetes.io/projected/24372f9b-b303-4136-bbe5-30ffd8b21823-kube-api-access-ghvd7\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028456 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8g65\" (UniqueName: \"kubernetes.io/projected/26bd0fe8-c472-4378-b583-87868be32419-kube-api-access-q8g65\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.028502 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-config-data\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.034905 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-internal-tls-certs\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.035418 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-public-tls-certs\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.035591 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-config-data-custom\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.038594 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-config-data\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.040515 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24372f9b-b303-4136-bbe5-30ffd8b21823-combined-ca-bundle\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.045708 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghvd7\" (UniqueName: \"kubernetes.io/projected/24372f9b-b303-4136-bbe5-30ffd8b21823-kube-api-access-ghvd7\") pod \"heat-api-568f945488-bz9kx\" (UID: \"24372f9b-b303-4136-bbe5-30ffd8b21823\") " 
pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.131542 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8g65\" (UniqueName: \"kubernetes.io/projected/26bd0fe8-c472-4378-b583-87868be32419-kube-api-access-q8g65\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.131722 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-config-data\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.131772 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-config-data-custom\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.131826 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-public-tls-certs\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.131917 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-internal-tls-certs\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.132004 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-combined-ca-bundle\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.137723 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.138668 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-internal-tls-certs\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.139288 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-config-data\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.140862 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-config-data-custom\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.141295 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-public-tls-certs\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.147022 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26bd0fe8-c472-4378-b583-87868be32419-combined-ca-bundle\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.164732 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8g65\" (UniqueName: \"kubernetes.io/projected/26bd0fe8-c472-4378-b583-87868be32419-kube-api-access-q8g65\") pod \"heat-cfnapi-79f6fbff8d-p2hvc\" (UID: \"26bd0fe8-c472-4378-b583-87868be32419\") " pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.230779 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.654252 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-568f945488-bz9kx"] Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.670543 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.789029 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-79f6fbff8d-p2hvc"] Nov 23 08:25:14 crc kubenswrapper[4988]: W1123 08:25:14.792582 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26bd0fe8_c472_4378_b583_87868be32419.slice/crio-cc65e5b347bda5f6b9021d691a7e4a2fc9fe7ccecb4886960dc5ad0db4d36371 WatchSource:0}: Error finding container cc65e5b347bda5f6b9021d691a7e4a2fc9fe7ccecb4886960dc5ad0db4d36371: Status 404 returned error can't find the container with id cc65e5b347bda5f6b9021d691a7e4a2fc9fe7ccecb4886960dc5ad0db4d36371 Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.814181 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-b646bc85-sjc6z" event={"ID":"01c678f9-ada0-449b-b53e-d5831743585c","Type":"ContainerStarted","Data":"55f72d988efd32e00ab03f639c537c29faaac5fa99f8b5f8322b555b01055540"} Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.814617 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.824951 4988 generic.go:334] "Generic (PLEG): container finished" podID="10f03070-433b-4e4a-ba16-014a6c6a3094" containerID="724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d" exitCode=0 Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.825108 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d85b8db74-6lnzm" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.825400 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d85b8db74-6lnzm" event={"ID":"10f03070-433b-4e4a-ba16-014a6c6a3094","Type":"ContainerDied","Data":"724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d"} Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.825430 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d85b8db74-6lnzm" event={"ID":"10f03070-433b-4e4a-ba16-014a6c6a3094","Type":"ContainerDied","Data":"9207efa9129117c095e30c44cd25f665a0d6a4a37d98bb58c5b6647165524c82"} Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.825447 4988 scope.go:117] "RemoveContainer" containerID="724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.830752 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-b646bc85-sjc6z" podStartSLOduration=2.8307394390000002 podStartE2EDuration="2.830739439s" podCreationTimestamp="2025-11-23 08:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:25:14.828462584 +0000 UTC m=+5967.136975347" watchObservedRunningTime="2025-11-23 08:25:14.830739439 +0000 UTC m=+5967.139252202" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.834663 4988 generic.go:334] "Generic (PLEG): container finished" podID="94644089-ebbf-4e4e-aa96-de3526cdce8b" containerID="d157692007b1dc71c3fefd31651f741958a96077816e438f2c495ab559e069ed" exitCode=1 Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.834754 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" 
event={"ID":"94644089-ebbf-4e4e-aa96-de3526cdce8b","Type":"ContainerDied","Data":"d157692007b1dc71c3fefd31651f741958a96077816e438f2c495ab559e069ed"} Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.835267 4988 scope.go:117] "RemoveContainer" containerID="d157692007b1dc71c3fefd31651f741958a96077816e438f2c495ab559e069ed" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.841379 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-568f945488-bz9kx" event={"ID":"24372f9b-b303-4136-bbe5-30ffd8b21823","Type":"ContainerStarted","Data":"f3a53b942b9ac1b7546e8b9bc71a13c49acc80e3d5ccd7c29a358f793f20e696"} Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.848923 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data-custom\") pod \"10f03070-433b-4e4a-ba16-014a6c6a3094\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.848997 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-combined-ca-bundle\") pod \"10f03070-433b-4e4a-ba16-014a6c6a3094\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.849052 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data\") pod \"10f03070-433b-4e4a-ba16-014a6c6a3094\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.849135 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvxzr\" (UniqueName: \"kubernetes.io/projected/10f03070-433b-4e4a-ba16-014a6c6a3094-kube-api-access-cvxzr\") pod \"10f03070-433b-4e4a-ba16-014a6c6a3094\" (UID: \"10f03070-433b-4e4a-ba16-014a6c6a3094\") " Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.850388 4988 generic.go:334] "Generic (PLEG): container finished" podID="b937708d-98b3-407e-9424-afa537bfa8d4" containerID="c18235c15f3da66274d6541fe0bab60a8c5250b890f42d238036fa04b4b9fae6" exitCode=1 Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.850454 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-848d9456b5-dbbfn" event={"ID":"b937708d-98b3-407e-9424-afa537bfa8d4","Type":"ContainerDied","Data":"c18235c15f3da66274d6541fe0bab60a8c5250b890f42d238036fa04b4b9fae6"} Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.851485 4988 scope.go:117] "RemoveContainer" containerID="c18235c15f3da66274d6541fe0bab60a8c5250b890f42d238036fa04b4b9fae6" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.856461 4988 generic.go:334] "Generic (PLEG): container finished" podID="8749f81d-53bd-4174-b21b-66ee8f64d330" containerID="7dc142a766d80f7632d597051ff2fe84b99c05a8e8a3f2c5ef9f3ef826ec060e" exitCode=0 Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.856580 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" event={"ID":"8749f81d-53bd-4174-b21b-66ee8f64d330","Type":"ContainerDied","Data":"7dc142a766d80f7632d597051ff2fe84b99c05a8e8a3f2c5ef9f3ef826ec060e"} Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.857304 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "10f03070-433b-4e4a-ba16-014a6c6a3094" (UID: "10f03070-433b-4e4a-ba16-014a6c6a3094"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.857481 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" event={"ID":"26bd0fe8-c472-4378-b583-87868be32419","Type":"ContainerStarted","Data":"cc65e5b347bda5f6b9021d691a7e4a2fc9fe7ccecb4886960dc5ad0db4d36371"} Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.865417 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f03070-433b-4e4a-ba16-014a6c6a3094-kube-api-access-cvxzr" (OuterVolumeSpecName: "kube-api-access-cvxzr") pod "10f03070-433b-4e4a-ba16-014a6c6a3094" (UID: "10f03070-433b-4e4a-ba16-014a6c6a3094"). InnerVolumeSpecName "kube-api-access-cvxzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.875702 4988 scope.go:117] "RemoveContainer" containerID="724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d" Nov 23 08:25:14 crc kubenswrapper[4988]: E1123 08:25:14.878770 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d\": container with ID starting with 724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d not found: ID does not exist" containerID="724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.878803 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d"} err="failed to get container status \"724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d\": rpc error: code = NotFound desc = could not find container \"724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d\": container with ID starting with 724fd7a8c2337c96740e17b672d0f430e726232f1eb239d6202dc70fcb11f47d not found: ID does not exist" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.905686 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.947318 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10f03070-433b-4e4a-ba16-014a6c6a3094" (UID: "10f03070-433b-4e4a-ba16-014a6c6a3094"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.966453 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvxzr\" (UniqueName: \"kubernetes.io/projected/10f03070-433b-4e4a-ba16-014a6c6a3094-kube-api-access-cvxzr\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.966481 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.966491 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:14 crc kubenswrapper[4988]: I1123 08:25:14.992339 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data" (OuterVolumeSpecName: "config-data") pod "10f03070-433b-4e4a-ba16-014a6c6a3094" (UID: "10f03070-433b-4e4a-ba16-014a6c6a3094"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.070397 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr4lh\" (UniqueName: \"kubernetes.io/projected/8749f81d-53bd-4174-b21b-66ee8f64d330-kube-api-access-vr4lh\") pod \"8749f81d-53bd-4174-b21b-66ee8f64d330\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.070524 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data-custom\") pod \"8749f81d-53bd-4174-b21b-66ee8f64d330\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.070632 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data\") pod \"8749f81d-53bd-4174-b21b-66ee8f64d330\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.070670 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-combined-ca-bundle\") pod \"8749f81d-53bd-4174-b21b-66ee8f64d330\" (UID: \"8749f81d-53bd-4174-b21b-66ee8f64d330\") " Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.071283 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f03070-433b-4e4a-ba16-014a6c6a3094-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.074026 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8749f81d-53bd-4174-b21b-66ee8f64d330" (UID: "8749f81d-53bd-4174-b21b-66ee8f64d330"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.074521 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8749f81d-53bd-4174-b21b-66ee8f64d330-kube-api-access-vr4lh" (OuterVolumeSpecName: "kube-api-access-vr4lh") pod "8749f81d-53bd-4174-b21b-66ee8f64d330" (UID: "8749f81d-53bd-4174-b21b-66ee8f64d330"). InnerVolumeSpecName "kube-api-access-vr4lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.126776 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8749f81d-53bd-4174-b21b-66ee8f64d330" (UID: "8749f81d-53bd-4174-b21b-66ee8f64d330"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.162778 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d85b8db74-6lnzm"] Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.172860 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.173182 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.173275 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr4lh\" (UniqueName: \"kubernetes.io/projected/8749f81d-53bd-4174-b21b-66ee8f64d330-kube-api-access-vr4lh\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.174396 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6d85b8db74-6lnzm"] Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.201151 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data" (OuterVolumeSpecName: "config-data") pod "8749f81d-53bd-4174-b21b-66ee8f64d330" (UID: "8749f81d-53bd-4174-b21b-66ee8f64d330"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.275531 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8749f81d-53bd-4174-b21b-66ee8f64d330-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.866376 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" event={"ID":"26bd0fe8-c472-4378-b583-87868be32419","Type":"ContainerStarted","Data":"8f4b66a7913e0201698c68ef7c7f554c890c660b7efe4332f27f00335752459b"} Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.866526 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.869006 4988 generic.go:334] "Generic (PLEG): container finished" podID="94644089-ebbf-4e4e-aa96-de3526cdce8b" containerID="343b1ea4d5448ca08ba9bbc4295c801b79aa4602a63354b910695fa52618cd20" exitCode=1 Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.869075 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" event={"ID":"94644089-ebbf-4e4e-aa96-de3526cdce8b","Type":"ContainerDied","Data":"343b1ea4d5448ca08ba9bbc4295c801b79aa4602a63354b910695fa52618cd20"} Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.869099 4988 scope.go:117] "RemoveContainer" containerID="d157692007b1dc71c3fefd31651f741958a96077816e438f2c495ab559e069ed" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.870422 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-568f945488-bz9kx" event={"ID":"24372f9b-b303-4136-bbe5-30ffd8b21823","Type":"ContainerStarted","Data":"2f3fed1bbb139c953ccca99c7258b17db87d61839f864921ec80a8ff7adfbef7"} Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.870445 4988 scope.go:117] "RemoveContainer" containerID="343b1ea4d5448ca08ba9bbc4295c801b79aa4602a63354b910695fa52618cd20" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.870522 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:15 crc kubenswrapper[4988]: E1123 08:25:15.870681 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bff56f848-tmpm9_openstack(94644089-ebbf-4e4e-aa96-de3526cdce8b)\"" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.872846 4988 generic.go:334] "Generic (PLEG): container finished" podID="b937708d-98b3-407e-9424-afa537bfa8d4" containerID="c1a48b389aba45bf08bff938247489970752da06ddcbed603b4f4a9b7e936118" exitCode=1 Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.872875 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-848d9456b5-dbbfn" event={"ID":"b937708d-98b3-407e-9424-afa537bfa8d4","Type":"ContainerDied","Data":"c1a48b389aba45bf08bff938247489970752da06ddcbed603b4f4a9b7e936118"} Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.873491 4988 scope.go:117] "RemoveContainer" containerID="c1a48b389aba45bf08bff938247489970752da06ddcbed603b4f4a9b7e936118" Nov 23 08:25:15 crc kubenswrapper[4988]: E1123 08:25:15.873707 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-848d9456b5-dbbfn_openstack(b937708d-98b3-407e-9424-afa537bfa8d4)\"" pod="openstack/heat-api-848d9456b5-dbbfn" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.875030 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.876224 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7cc8c7674f-2pf2g" event={"ID":"8749f81d-53bd-4174-b21b-66ee8f64d330","Type":"ContainerDied","Data":"c2149ac851acb36563cf058cdf46cadeef890aec1b761eaae551843c3da0c424"} Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.898773 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" podStartSLOduration=2.898755656 podStartE2EDuration="2.898755656s" podCreationTimestamp="2025-11-23 08:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:25:15.887710687 +0000 UTC m=+5968.196223450" watchObservedRunningTime="2025-11-23 08:25:15.898755656 +0000 UTC m=+5968.207268419" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.934464 4988 scope.go:117] "RemoveContainer" containerID="c18235c15f3da66274d6541fe0bab60a8c5250b890f42d238036fa04b4b9fae6" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.958990 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-568f945488-bz9kx" podStartSLOduration=2.958952255 podStartE2EDuration="2.958952255s" podCreationTimestamp="2025-11-23 08:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:25:15.938645169 +0000 UTC m=+5968.247157932" watchObservedRunningTime="2025-11-23 08:25:15.958952255 +0000 UTC m=+5968.267465018" Nov 23 08:25:15 crc kubenswrapper[4988]: I1123 08:25:15.998358 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7cc8c7674f-2pf2g"] Nov 23 08:25:16 crc kubenswrapper[4988]: I1123 08:25:16.005443 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7cc8c7674f-2pf2g"] Nov 23 08:25:16 crc kubenswrapper[4988]: I1123 08:25:16.009541 4988 scope.go:117] "RemoveContainer" containerID="7dc142a766d80f7632d597051ff2fe84b99c05a8e8a3f2c5ef9f3ef826ec060e" Nov 23 08:25:16 crc kubenswrapper[4988]: I1123 08:25:16.059333 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:16 crc kubenswrapper[4988]: I1123 08:25:16.506919 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f03070-433b-4e4a-ba16-014a6c6a3094" path="/var/lib/kubelet/pods/10f03070-433b-4e4a-ba16-014a6c6a3094/volumes" Nov 23 08:25:16 crc kubenswrapper[4988]: I1123 08:25:16.507888 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8749f81d-53bd-4174-b21b-66ee8f64d330" path="/var/lib/kubelet/pods/8749f81d-53bd-4174-b21b-66ee8f64d330/volumes" Nov 23 08:25:16 crc kubenswrapper[4988]: I1123 08:25:16.888545 4988 scope.go:117] "RemoveContainer" containerID="343b1ea4d5448ca08ba9bbc4295c801b79aa4602a63354b910695fa52618cd20" Nov 23 08:25:16 crc kubenswrapper[4988]: E1123 08:25:16.888830 4988 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bff56f848-tmpm9_openstack(94644089-ebbf-4e4e-aa96-de3526cdce8b)\"" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" Nov 23 08:25:16 crc kubenswrapper[4988]: I1123 08:25:16.892118 4988 scope.go:117] "RemoveContainer" containerID="c1a48b389aba45bf08bff938247489970752da06ddcbed603b4f4a9b7e936118" Nov 23 08:25:16 crc kubenswrapper[4988]: E1123 08:25:16.892390 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-848d9456b5-dbbfn_openstack(b937708d-98b3-407e-9424-afa537bfa8d4)\"" pod="openstack/heat-api-848d9456b5-dbbfn" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" Nov 23 08:25:17 crc kubenswrapper[4988]: I1123 08:25:17.845118 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:17 crc kubenswrapper[4988]: I1123 08:25:17.845206 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:17 crc kubenswrapper[4988]: I1123 08:25:17.870122 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:17 crc kubenswrapper[4988]: I1123 08:25:17.870173 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:17 crc kubenswrapper[4988]: I1123 08:25:17.900034 4988 scope.go:117] "RemoveContainer" containerID="c1a48b389aba45bf08bff938247489970752da06ddcbed603b4f4a9b7e936118" Nov 23 08:25:17 crc kubenswrapper[4988]: I1123 08:25:17.900288 4988 scope.go:117] "RemoveContainer" containerID="343b1ea4d5448ca08ba9bbc4295c801b79aa4602a63354b910695fa52618cd20" Nov 23 08:25:17 crc kubenswrapper[4988]: E1123 08:25:17.900309 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-848d9456b5-dbbfn_openstack(b937708d-98b3-407e-9424-afa537bfa8d4)\"" pod="openstack/heat-api-848d9456b5-dbbfn" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" Nov 23 08:25:17 crc kubenswrapper[4988]: E1123 08:25:17.900500 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bff56f848-tmpm9_openstack(94644089-ebbf-4e4e-aa96-de3526cdce8b)\"" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" Nov 23 08:25:18 crc kubenswrapper[4988]: I1123 08:25:18.908863 4988 scope.go:117] "RemoveContainer" containerID="c1a48b389aba45bf08bff938247489970752da06ddcbed603b4f4a9b7e936118" Nov 23 08:25:18 crc kubenswrapper[4988]: I1123 08:25:18.909136 4988 scope.go:117] "RemoveContainer" containerID="343b1ea4d5448ca08ba9bbc4295c801b79aa4602a63354b910695fa52618cd20" Nov 23 08:25:18 crc kubenswrapper[4988]: E1123 08:25:18.909136 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-848d9456b5-dbbfn_openstack(b937708d-98b3-407e-9424-afa537bfa8d4)\"" pod="openstack/heat-api-848d9456b5-dbbfn" 
podUID="b937708d-98b3-407e-9424-afa537bfa8d4" Nov 23 08:25:18 crc kubenswrapper[4988]: E1123 08:25:18.909554 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-6bff56f848-tmpm9_openstack(94644089-ebbf-4e4e-aa96-de3526cdce8b)\"" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" Nov 23 08:25:19 crc kubenswrapper[4988]: I1123 08:25:19.411097 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79ccf4df9b-dnc54" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.95:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.95:8443: connect: connection refused" Nov 23 08:25:20 crc kubenswrapper[4988]: I1123 08:25:20.461364 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-568f945488-bz9kx" Nov 23 08:25:20 crc kubenswrapper[4988]: I1123 08:25:20.547433 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-848d9456b5-dbbfn"] Nov 23 08:25:20 crc kubenswrapper[4988]: I1123 08:25:20.928224 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-848d9456b5-dbbfn" event={"ID":"b937708d-98b3-407e-9424-afa537bfa8d4","Type":"ContainerDied","Data":"599f7cf2f82dc800cd0e5040d042eb25437da2e22b5628633bd3fcb5d5207d21"} Nov 23 08:25:20 crc kubenswrapper[4988]: I1123 08:25:20.928454 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="599f7cf2f82dc800cd0e5040d042eb25437da2e22b5628633bd3fcb5d5207d21" Nov 23 08:25:20 crc kubenswrapper[4988]: I1123 08:25:20.934583 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.102091 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data\") pod \"b937708d-98b3-407e-9424-afa537bfa8d4\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.102246 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zvfl\" (UniqueName: \"kubernetes.io/projected/b937708d-98b3-407e-9424-afa537bfa8d4-kube-api-access-5zvfl\") pod \"b937708d-98b3-407e-9424-afa537bfa8d4\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.102303 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-combined-ca-bundle\") pod \"b937708d-98b3-407e-9424-afa537bfa8d4\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.102375 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data-custom\") pod \"b937708d-98b3-407e-9424-afa537bfa8d4\" (UID: \"b937708d-98b3-407e-9424-afa537bfa8d4\") " Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.107928 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b937708d-98b3-407e-9424-afa537bfa8d4-kube-api-access-5zvfl" (OuterVolumeSpecName: "kube-api-access-5zvfl") pod "b937708d-98b3-407e-9424-afa537bfa8d4" (UID: "b937708d-98b3-407e-9424-afa537bfa8d4"). InnerVolumeSpecName "kube-api-access-5zvfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.108235 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b937708d-98b3-407e-9424-afa537bfa8d4" (UID: "b937708d-98b3-407e-9424-afa537bfa8d4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.142676 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b937708d-98b3-407e-9424-afa537bfa8d4" (UID: "b937708d-98b3-407e-9424-afa537bfa8d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.158603 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data" (OuterVolumeSpecName: "config-data") pod "b937708d-98b3-407e-9424-afa537bfa8d4" (UID: "b937708d-98b3-407e-9424-afa537bfa8d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.205225 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zvfl\" (UniqueName: \"kubernetes.io/projected/b937708d-98b3-407e-9424-afa537bfa8d4-kube-api-access-5zvfl\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.205261 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.205269 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.205280 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b937708d-98b3-407e-9424-afa537bfa8d4-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.937530 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-848d9456b5-dbbfn" Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.969616 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-848d9456b5-dbbfn"] Nov 23 08:25:21 crc kubenswrapper[4988]: I1123 08:25:21.978444 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-848d9456b5-dbbfn"] Nov 23 08:25:22 crc kubenswrapper[4988]: I1123 08:25:22.505461 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" path="/var/lib/kubelet/pods/b937708d-98b3-407e-9424-afa537bfa8d4/volumes" Nov 23 08:25:23 crc kubenswrapper[4988]: I1123 08:25:23.496928 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:25:23 crc kubenswrapper[4988]: E1123 08:25:23.497608 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:25:25 crc kubenswrapper[4988]: I1123 08:25:25.550088 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-79f6fbff8d-p2hvc" Nov 23 08:25:25 crc kubenswrapper[4988]: I1123 08:25:25.627151 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bff56f848-tmpm9"] Nov 23 08:25:25 crc kubenswrapper[4988]: I1123 08:25:25.981559 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" event={"ID":"94644089-ebbf-4e4e-aa96-de3526cdce8b","Type":"ContainerDied","Data":"8693ec28642defdecc4b6596926d6995b00c0a5ab2422bd392de26ab375a9ed9"} Nov 23 08:25:25 crc kubenswrapper[4988]: I1123 08:25:25.981603 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8693ec28642defdecc4b6596926d6995b00c0a5ab2422bd392de26ab375a9ed9" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.030976 4988 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.039214 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-f2zbd"] Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.047823 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4926-account-create-qq7gk"] Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.060104 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-f2zbd"] Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.082977 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4926-account-create-qq7gk"] Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.120923 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-combined-ca-bundle\") pod \"94644089-ebbf-4e4e-aa96-de3526cdce8b\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.121053 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data-custom\") pod \"94644089-ebbf-4e4e-aa96-de3526cdce8b\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.121079 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data\") pod \"94644089-ebbf-4e4e-aa96-de3526cdce8b\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.121117 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvg26\" (UniqueName: \"kubernetes.io/projected/94644089-ebbf-4e4e-aa96-de3526cdce8b-kube-api-access-tvg26\") pod \"94644089-ebbf-4e4e-aa96-de3526cdce8b\" (UID: \"94644089-ebbf-4e4e-aa96-de3526cdce8b\") " Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.125960 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94644089-ebbf-4e4e-aa96-de3526cdce8b-kube-api-access-tvg26" (OuterVolumeSpecName: "kube-api-access-tvg26") pod "94644089-ebbf-4e4e-aa96-de3526cdce8b" (UID: "94644089-ebbf-4e4e-aa96-de3526cdce8b"). InnerVolumeSpecName "kube-api-access-tvg26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.126218 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "94644089-ebbf-4e4e-aa96-de3526cdce8b" (UID: "94644089-ebbf-4e4e-aa96-de3526cdce8b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.152991 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94644089-ebbf-4e4e-aa96-de3526cdce8b" (UID: "94644089-ebbf-4e4e-aa96-de3526cdce8b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.175630 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data" (OuterVolumeSpecName: "config-data") pod "94644089-ebbf-4e4e-aa96-de3526cdce8b" (UID: "94644089-ebbf-4e4e-aa96-de3526cdce8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.223742 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.225739 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.225773 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvg26\" (UniqueName: \"kubernetes.io/projected/94644089-ebbf-4e4e-aa96-de3526cdce8b-kube-api-access-tvg26\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.225809 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94644089-ebbf-4e4e-aa96-de3526cdce8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.505848 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1861e644-6cb5-4be2-a63d-eb80cfb96dc0" path="/var/lib/kubelet/pods/1861e644-6cb5-4be2-a63d-eb80cfb96dc0/volumes" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.506434 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574ccc4f-ea1b-4a00-b4bf-63c826025a09" path="/var/lib/kubelet/pods/574ccc4f-ea1b-4a00-b4bf-63c826025a09/volumes" Nov 23 08:25:26 crc kubenswrapper[4988]: I1123 08:25:26.989435 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6bff56f848-tmpm9" Nov 23 08:25:27 crc kubenswrapper[4988]: I1123 08:25:27.012529 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6bff56f848-tmpm9"] Nov 23 08:25:27 crc kubenswrapper[4988]: I1123 08:25:27.019894 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6bff56f848-tmpm9"] Nov 23 08:25:28 crc kubenswrapper[4988]: I1123 08:25:28.509918 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" path="/var/lib/kubelet/pods/94644089-ebbf-4e4e-aa96-de3526cdce8b/volumes" Nov 23 08:25:29 crc kubenswrapper[4988]: I1123 08:25:29.410656 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-79ccf4df9b-dnc54" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.95:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.95:8443: connect: connection refused" Nov 23 08:25:29 crc kubenswrapper[4988]: I1123 08:25:29.410802 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:25:33 crc kubenswrapper[4988]: I1123 08:25:33.161307 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-b646bc85-sjc6z" Nov 23 08:25:33 crc kubenswrapper[4988]: I1123 08:25:33.235990 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-9dc8c4bfb-l8fb6"] Nov 23 08:25:33 crc kubenswrapper[4988]: I1123 08:25:33.236291 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" podUID="6c05dea4-b5f4-4951-8c2f-106c877369ea" containerName="heat-engine" containerID="cri-o://27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a" gracePeriod=60 Nov 23 08:25:35 crc kubenswrapper[4988]: I1123 08:25:35.030548 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-b99ns"] Nov 23 08:25:35 crc kubenswrapper[4988]: I1123 08:25:35.045986 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-b99ns"] Nov 23 08:25:36 crc kubenswrapper[4988]: E1123 08:25:36.042260 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 23 08:25:36 crc kubenswrapper[4988]: E1123 08:25:36.044906 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 23 08:25:36 crc kubenswrapper[4988]: E1123 08:25:36.046659 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 23 08:25:36 crc kubenswrapper[4988]: E1123 08:25:36.046834 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" podUID="6c05dea4-b5f4-4951-8c2f-106c877369ea" containerName="heat-engine" Nov 23 08:25:36 crc kubenswrapper[4988]: I1123 08:25:36.516600 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc2308d-5918-4310-b702-ed5b3d581345" path="/var/lib/kubelet/pods/dcc2308d-5918-4310-b702-ed5b3d581345/volumes" Nov 23 08:25:37 crc kubenswrapper[4988]: I1123 08:25:37.497422 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:25:37 crc kubenswrapper[4988]: E1123 08:25:37.498393 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.154415 4988 generic.go:334] "Generic (PLEG): container finished" podID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerID="5eff7259b5e9b42064fc03cfc94c87f6a26c2210d84ca74820726d1fde757599" exitCode=137 Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.154633 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79ccf4df9b-dnc54" event={"ID":"fee15316-b0c1-4900-95fd-49110a4bab1a","Type":"ContainerDied","Data":"5eff7259b5e9b42064fc03cfc94c87f6a26c2210d84ca74820726d1fde757599"} Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.399050 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.549708 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-689pl\" (UniqueName: \"kubernetes.io/projected/fee15316-b0c1-4900-95fd-49110a4bab1a-kube-api-access-689pl\") pod \"fee15316-b0c1-4900-95fd-49110a4bab1a\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.549758 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-scripts\") pod \"fee15316-b0c1-4900-95fd-49110a4bab1a\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.549781 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-config-data\") pod \"fee15316-b0c1-4900-95fd-49110a4bab1a\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.549804 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee15316-b0c1-4900-95fd-49110a4bab1a-logs\") pod \"fee15316-b0c1-4900-95fd-49110a4bab1a\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.549924 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-combined-ca-bundle\") pod \"fee15316-b0c1-4900-95fd-49110a4bab1a\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.549956 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-tls-certs\") pod \"fee15316-b0c1-4900-95fd-49110a4bab1a\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.549996 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-secret-key\") pod \"fee15316-b0c1-4900-95fd-49110a4bab1a\" (UID: \"fee15316-b0c1-4900-95fd-49110a4bab1a\") " Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.552370 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee15316-b0c1-4900-95fd-49110a4bab1a-logs" (OuterVolumeSpecName: "logs") pod "fee15316-b0c1-4900-95fd-49110a4bab1a" (UID: "fee15316-b0c1-4900-95fd-49110a4bab1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.559469 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fee15316-b0c1-4900-95fd-49110a4bab1a" (UID: "fee15316-b0c1-4900-95fd-49110a4bab1a"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.559514 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee15316-b0c1-4900-95fd-49110a4bab1a-kube-api-access-689pl" (OuterVolumeSpecName: "kube-api-access-689pl") pod "fee15316-b0c1-4900-95fd-49110a4bab1a" (UID: "fee15316-b0c1-4900-95fd-49110a4bab1a"). InnerVolumeSpecName "kube-api-access-689pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.583385 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fee15316-b0c1-4900-95fd-49110a4bab1a" (UID: "fee15316-b0c1-4900-95fd-49110a4bab1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.586004 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-config-data" (OuterVolumeSpecName: "config-data") pod "fee15316-b0c1-4900-95fd-49110a4bab1a" (UID: "fee15316-b0c1-4900-95fd-49110a4bab1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.601494 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-scripts" (OuterVolumeSpecName: "scripts") pod "fee15316-b0c1-4900-95fd-49110a4bab1a" (UID: "fee15316-b0c1-4900-95fd-49110a4bab1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.652820 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.652846 4988 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.652856 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-689pl\" (UniqueName: \"kubernetes.io/projected/fee15316-b0c1-4900-95fd-49110a4bab1a-kube-api-access-689pl\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.652864 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.652872 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fee15316-b0c1-4900-95fd-49110a4bab1a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.652880 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee15316-b0c1-4900-95fd-49110a4bab1a-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.671947 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "fee15316-b0c1-4900-95fd-49110a4bab1a" (UID: "fee15316-b0c1-4900-95fd-49110a4bab1a"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:39 crc kubenswrapper[4988]: I1123 08:25:39.754120 4988 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee15316-b0c1-4900-95fd-49110a4bab1a-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:40 crc kubenswrapper[4988]: I1123 08:25:40.169417 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79ccf4df9b-dnc54" event={"ID":"fee15316-b0c1-4900-95fd-49110a4bab1a","Type":"ContainerDied","Data":"442ff9e3f69c0a228e8c66fc70126f25e7de60106d79f88d6342a0a285edc168"} Nov 23 08:25:40 crc kubenswrapper[4988]: I1123 08:25:40.169494 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79ccf4df9b-dnc54" Nov 23 08:25:40 crc kubenswrapper[4988]: I1123 08:25:40.169829 4988 scope.go:117] "RemoveContainer" containerID="2bc9b1ba3ee656da3d3db7f4806251c70cfa46b31562602dfc5b28713c09068b" Nov 23 08:25:40 crc kubenswrapper[4988]: I1123 08:25:40.216238 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79ccf4df9b-dnc54"] Nov 23 08:25:40 crc kubenswrapper[4988]: I1123 08:25:40.244341 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79ccf4df9b-dnc54"] Nov 23 08:25:40 crc kubenswrapper[4988]: I1123 08:25:40.399169 4988 scope.go:117] "RemoveContainer" containerID="5eff7259b5e9b42064fc03cfc94c87f6a26c2210d84ca74820726d1fde757599" Nov 23 08:25:40 crc kubenswrapper[4988]: I1123 08:25:40.512569 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" path="/var/lib/kubelet/pods/fee15316-b0c1-4900-95fd-49110a4bab1a/volumes" Nov 23 08:25:45 crc kubenswrapper[4988]: I1123 08:25:45.980389 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.091443 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-combined-ca-bundle\") pod \"6c05dea4-b5f4-4951-8c2f-106c877369ea\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.091746 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data-custom\") pod \"6c05dea4-b5f4-4951-8c2f-106c877369ea\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.091807 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data\") pod \"6c05dea4-b5f4-4951-8c2f-106c877369ea\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.091904 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-572sh\" (UniqueName: \"kubernetes.io/projected/6c05dea4-b5f4-4951-8c2f-106c877369ea-kube-api-access-572sh\") pod \"6c05dea4-b5f4-4951-8c2f-106c877369ea\" (UID: \"6c05dea4-b5f4-4951-8c2f-106c877369ea\") " Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.098489 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c05dea4-b5f4-4951-8c2f-106c877369ea" (UID: "6c05dea4-b5f4-4951-8c2f-106c877369ea"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.098689 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c05dea4-b5f4-4951-8c2f-106c877369ea-kube-api-access-572sh" (OuterVolumeSpecName: "kube-api-access-572sh") pod "6c05dea4-b5f4-4951-8c2f-106c877369ea" (UID: "6c05dea4-b5f4-4951-8c2f-106c877369ea"). InnerVolumeSpecName "kube-api-access-572sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.119771 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c05dea4-b5f4-4951-8c2f-106c877369ea" (UID: "6c05dea4-b5f4-4951-8c2f-106c877369ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.170855 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data" (OuterVolumeSpecName: "config-data") pod "6c05dea4-b5f4-4951-8c2f-106c877369ea" (UID: "6c05dea4-b5f4-4951-8c2f-106c877369ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.194313 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.194338 4988 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.194352 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c05dea4-b5f4-4951-8c2f-106c877369ea-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.194361 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-572sh\" (UniqueName: \"kubernetes.io/projected/6c05dea4-b5f4-4951-8c2f-106c877369ea-kube-api-access-572sh\") on node \"crc\" DevicePath \"\"" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.250631 4988 generic.go:334] "Generic (PLEG): container finished" podID="6c05dea4-b5f4-4951-8c2f-106c877369ea" containerID="27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a" exitCode=0 Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.250684 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.250693 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" event={"ID":"6c05dea4-b5f4-4951-8c2f-106c877369ea","Type":"ContainerDied","Data":"27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a"} Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.251134 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-9dc8c4bfb-l8fb6" event={"ID":"6c05dea4-b5f4-4951-8c2f-106c877369ea","Type":"ContainerDied","Data":"dd627ec39f6babb5e5ea57d3a1a66f07d1791c35d6e237df877e9d4935661dbf"} Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.251164 4988 scope.go:117] "RemoveContainer" containerID="27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.276818 4988 scope.go:117] "RemoveContainer" containerID="27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a" Nov 23 08:25:46 crc kubenswrapper[4988]: E1123 08:25:46.277431 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a\": container with ID starting with 27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a not found: ID does not exist" containerID="27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.277478 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a"} err="failed to get container status \"27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a\": rpc error: code = NotFound desc = could not find container \"27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a\": container with ID starting with 
27d2c6d14589a2000e22948b0ad95ff982ffafb2ccb21ea46de3281c795d768a not found: ID does not exist" Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.297743 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-9dc8c4bfb-l8fb6"] Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.304363 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-9dc8c4bfb-l8fb6"] Nov 23 08:25:46 crc kubenswrapper[4988]: I1123 08:25:46.508084 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c05dea4-b5f4-4951-8c2f-106c877369ea" path="/var/lib/kubelet/pods/6c05dea4-b5f4-4951-8c2f-106c877369ea/volumes" Nov 23 08:25:50 crc kubenswrapper[4988]: I1123 08:25:50.496743 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:25:50 crc kubenswrapper[4988]: E1123 08:25:50.497413 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.013690 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr"] Nov 23 08:25:56 crc kubenswrapper[4988]: E1123 08:25:56.017108 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c05dea4-b5f4-4951-8c2f-106c877369ea" containerName="heat-engine" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017123 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c05dea4-b5f4-4951-8c2f-106c877369ea" containerName="heat-engine" Nov 23 08:25:56 crc kubenswrapper[4988]: E1123 08:25:56.017149 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon-log" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017168 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon-log" Nov 23 08:25:56 crc kubenswrapper[4988]: E1123 08:25:56.017181 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8749f81d-53bd-4174-b21b-66ee8f64d330" containerName="heat-cfnapi" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017374 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="8749f81d-53bd-4174-b21b-66ee8f64d330" containerName="heat-cfnapi" Nov 23 08:25:56 crc kubenswrapper[4988]: E1123 08:25:56.017384 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" containerName="heat-cfnapi" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017390 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" containerName="heat-cfnapi" Nov 23 08:25:56 crc kubenswrapper[4988]: E1123 08:25:56.017401 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" containerName="heat-api" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017407 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" containerName="heat-api" Nov 23 08:25:56 crc kubenswrapper[4988]: E1123 08:25:56.017419 
4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f03070-433b-4e4a-ba16-014a6c6a3094" containerName="heat-api" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017424 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f03070-433b-4e4a-ba16-014a6c6a3094" containerName="heat-api" Nov 23 08:25:56 crc kubenswrapper[4988]: E1123 08:25:56.017448 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" containerName="heat-api" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017455 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" containerName="heat-api" Nov 23 08:25:56 crc kubenswrapper[4988]: E1123 08:25:56.017468 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017474 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon" Nov 23 08:25:56 crc kubenswrapper[4988]: E1123 08:25:56.017485 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" containerName="heat-cfnapi" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017490 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" containerName="heat-cfnapi" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017709 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="8749f81d-53bd-4174-b21b-66ee8f64d330" containerName="heat-cfnapi" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017729 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" containerName="heat-cfnapi" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017742 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017765 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="94644089-ebbf-4e4e-aa96-de3526cdce8b" containerName="heat-cfnapi" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017773 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c05dea4-b5f4-4951-8c2f-106c877369ea" containerName="heat-engine" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017780 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" containerName="heat-api" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017791 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f03070-433b-4e4a-ba16-014a6c6a3094" containerName="heat-api" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017800 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="b937708d-98b3-407e-9424-afa537bfa8d4" containerName="heat-api" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.017812 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee15316-b0c1-4900-95fd-49110a4bab1a" containerName="horizon-log" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.019388 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.021525 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.026077 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr"] Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.105024 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxs9\" (UniqueName: \"kubernetes.io/projected/504adff5-b08d-4279-8853-26c0b3847e79-kube-api-access-bhxs9\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.105486 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.105648 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.208650 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.208761 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.208865 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxs9\" (UniqueName: \"kubernetes.io/projected/504adff5-b08d-4279-8853-26c0b3847e79-kube-api-access-bhxs9\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.209304 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.209682 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.239348 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxs9\" (UniqueName: \"kubernetes.io/projected/504adff5-b08d-4279-8853-26c0b3847e79-kube-api-access-bhxs9\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.346692 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:25:56 crc kubenswrapper[4988]: I1123 08:25:56.689933 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr"] Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.371543 4988 generic.go:334] "Generic (PLEG): container finished" podID="504adff5-b08d-4279-8853-26c0b3847e79" containerID="9419e83e7c47fc0663b1d05bc41943af6837e3bf0137b129f7376ac73d2c749a" exitCode=0 Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.371949 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" event={"ID":"504adff5-b08d-4279-8853-26c0b3847e79","Type":"ContainerDied","Data":"9419e83e7c47fc0663b1d05bc41943af6837e3bf0137b129f7376ac73d2c749a"} Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.371987 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" event={"ID":"504adff5-b08d-4279-8853-26c0b3847e79","Type":"ContainerStarted","Data":"effe28f5aaff323d9fb3f01a6583c09437a1d8f8c8be3267eea5086b745c5784"} Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.669093 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pm8zt"] Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.681001 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.688133 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pm8zt"] Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.844447 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-catalog-content\") pod \"redhat-operators-pm8zt\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.844791 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-utilities\") pod \"redhat-operators-pm8zt\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.844844 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ttj\" (UniqueName: \"kubernetes.io/projected/d678109b-3e4e-441e-aa10-d63b5c435418-kube-api-access-z4ttj\") pod \"redhat-operators-pm8zt\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.946658 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-catalog-content\") pod \"redhat-operators-pm8zt\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.946757 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-utilities\") pod \"redhat-operators-pm8zt\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.946780 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ttj\" (UniqueName: \"kubernetes.io/projected/d678109b-3e4e-441e-aa10-d63b5c435418-kube-api-access-z4ttj\") pod \"redhat-operators-pm8zt\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.947499 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-utilities\") pod \"redhat-operators-pm8zt\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.947892 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-catalog-content\") pod \"redhat-operators-pm8zt\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:57 crc kubenswrapper[4988]: I1123 08:25:57.978889 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z4ttj\" (UniqueName: \"kubernetes.io/projected/d678109b-3e4e-441e-aa10-d63b5c435418-kube-api-access-z4ttj\") pod \"redhat-operators-pm8zt\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:58 crc kubenswrapper[4988]: I1123 08:25:58.018688 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:25:58 crc kubenswrapper[4988]: I1123 08:25:58.486174 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pm8zt"] Nov 23 08:25:58 crc kubenswrapper[4988]: W1123 08:25:58.574415 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd678109b_3e4e_441e_aa10_d63b5c435418.slice/crio-018f53fae3e001073147ae2459d823c0340e6689287247aeaac6cef4ea96e276 WatchSource:0}: Error finding container 018f53fae3e001073147ae2459d823c0340e6689287247aeaac6cef4ea96e276: Status 404 returned error can't find the container with id 018f53fae3e001073147ae2459d823c0340e6689287247aeaac6cef4ea96e276 Nov 23 08:25:59 crc kubenswrapper[4988]: I1123 08:25:59.403398 4988 generic.go:334] "Generic (PLEG): container finished" podID="d678109b-3e4e-441e-aa10-d63b5c435418" containerID="22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747" exitCode=0 Nov 23 08:25:59 crc kubenswrapper[4988]: I1123 08:25:59.403520 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm8zt" event={"ID":"d678109b-3e4e-441e-aa10-d63b5c435418","Type":"ContainerDied","Data":"22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747"} Nov 23 08:25:59 crc kubenswrapper[4988]: I1123 08:25:59.403847 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm8zt" event={"ID":"d678109b-3e4e-441e-aa10-d63b5c435418","Type":"ContainerStarted","Data":"018f53fae3e001073147ae2459d823c0340e6689287247aeaac6cef4ea96e276"} Nov 23 08:25:59 crc kubenswrapper[4988]: I1123 08:25:59.409788 4988 generic.go:334] "Generic (PLEG): container finished" podID="504adff5-b08d-4279-8853-26c0b3847e79" containerID="ddd049160766dedc2abb2e2d08707e0a17eb215a6915c852480dcf1e11074aa7" exitCode=0 Nov 23 08:25:59 crc kubenswrapper[4988]: I1123 08:25:59.409826 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" event={"ID":"504adff5-b08d-4279-8853-26c0b3847e79","Type":"ContainerDied","Data":"ddd049160766dedc2abb2e2d08707e0a17eb215a6915c852480dcf1e11074aa7"} Nov 23 08:26:00 crc kubenswrapper[4988]: I1123 08:26:00.136538 4988 scope.go:117] "RemoveContainer" containerID="672ea2b919cbb690ee278ac7e814e8d3f244bfbdea3b4c25fedcd9bdd7b959a4" Nov 23 08:26:00 crc kubenswrapper[4988]: I1123 08:26:00.253782 4988 scope.go:117] "RemoveContainer" containerID="ecce92f3dbdb805f444c97e95c4868f9b34038297615d444909351568a795747" Nov 23 08:26:00 crc kubenswrapper[4988]: I1123 08:26:00.350102 4988 scope.go:117] "RemoveContainer" containerID="fb1098e99cc61b4f1cf86429d9a772f38207c1b70424bd9990431600c0788227" Nov 23 08:26:00 crc kubenswrapper[4988]: I1123 08:26:00.431254 4988 generic.go:334] "Generic (PLEG): container finished" podID="504adff5-b08d-4279-8853-26c0b3847e79" containerID="b29bfc2774046da841426eb4c135b1d060b26a490bf4438b5be054a2016eefb5" exitCode=0 Nov 23 08:26:00 crc kubenswrapper[4988]: I1123 08:26:00.431300 4988 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" event={"ID":"504adff5-b08d-4279-8853-26c0b3847e79","Type":"ContainerDied","Data":"b29bfc2774046da841426eb4c135b1d060b26a490bf4438b5be054a2016eefb5"} Nov 23 08:26:01 crc kubenswrapper[4988]: I1123 08:26:01.441585 4988 generic.go:334] "Generic (PLEG): container finished" podID="d678109b-3e4e-441e-aa10-d63b5c435418" containerID="d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe" exitCode=0 Nov 23 08:26:01 crc kubenswrapper[4988]: I1123 08:26:01.441678 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm8zt" event={"ID":"d678109b-3e4e-441e-aa10-d63b5c435418","Type":"ContainerDied","Data":"d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe"} Nov 23 08:26:01 crc kubenswrapper[4988]: I1123 08:26:01.496494 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:26:01 crc kubenswrapper[4988]: E1123 08:26:01.497013 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:26:01 crc kubenswrapper[4988]: I1123 08:26:01.926598 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.027138 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-bundle\") pod \"504adff5-b08d-4279-8853-26c0b3847e79\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.027449 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-util\") pod \"504adff5-b08d-4279-8853-26c0b3847e79\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.027514 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhxs9\" (UniqueName: \"kubernetes.io/projected/504adff5-b08d-4279-8853-26c0b3847e79-kube-api-access-bhxs9\") pod \"504adff5-b08d-4279-8853-26c0b3847e79\" (UID: \"504adff5-b08d-4279-8853-26c0b3847e79\") " Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.032295 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-util" (OuterVolumeSpecName: "util") pod "504adff5-b08d-4279-8853-26c0b3847e79" (UID: "504adff5-b08d-4279-8853-26c0b3847e79"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.033396 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-bundle" (OuterVolumeSpecName: "bundle") pod "504adff5-b08d-4279-8853-26c0b3847e79" (UID: "504adff5-b08d-4279-8853-26c0b3847e79"). 
InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.033689 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504adff5-b08d-4279-8853-26c0b3847e79-kube-api-access-bhxs9" (OuterVolumeSpecName: "kube-api-access-bhxs9") pod "504adff5-b08d-4279-8853-26c0b3847e79" (UID: "504adff5-b08d-4279-8853-26c0b3847e79"). InnerVolumeSpecName "kube-api-access-bhxs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.130130 4988 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.130576 4988 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/504adff5-b08d-4279-8853-26c0b3847e79-util\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.130598 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhxs9\" (UniqueName: \"kubernetes.io/projected/504adff5-b08d-4279-8853-26c0b3847e79-kube-api-access-bhxs9\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.459729 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" event={"ID":"504adff5-b08d-4279-8853-26c0b3847e79","Type":"ContainerDied","Data":"effe28f5aaff323d9fb3f01a6583c09437a1d8f8c8be3267eea5086b745c5784"} Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.459775 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="effe28f5aaff323d9fb3f01a6583c09437a1d8f8c8be3267eea5086b745c5784" Nov 23 08:26:02 crc kubenswrapper[4988]: I1123 08:26:02.459789 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr" Nov 23 08:26:04 crc kubenswrapper[4988]: I1123 08:26:04.488451 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm8zt" event={"ID":"d678109b-3e4e-441e-aa10-d63b5c435418","Type":"ContainerStarted","Data":"66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8"} Nov 23 08:26:04 crc kubenswrapper[4988]: I1123 08:26:04.507699 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pm8zt" podStartSLOduration=2.721093164 podStartE2EDuration="7.507655808s" podCreationTimestamp="2025-11-23 08:25:57 +0000 UTC" firstStartedPulling="2025-11-23 08:25:59.406500998 +0000 UTC m=+6011.715013801" lastFinishedPulling="2025-11-23 08:26:04.193063672 +0000 UTC m=+6016.501576445" observedRunningTime="2025-11-23 08:26:04.504616263 +0000 UTC m=+6016.813129046" watchObservedRunningTime="2025-11-23 08:26:04.507655808 +0000 UTC m=+6016.816168581" Nov 23 08:26:08 crc kubenswrapper[4988]: I1123 08:26:08.019441 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:26:08 crc kubenswrapper[4988]: I1123 08:26:08.020077 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:26:09 crc kubenswrapper[4988]: I1123 08:26:09.105730 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pm8zt" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="registry-server" probeResult="failure" output=< Nov 23 08:26:09 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:26:09 crc kubenswrapper[4988]: > Nov 23 08:26:12 crc kubenswrapper[4988]: I1123 08:26:12.500008 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:26:12 crc kubenswrapper[4988]: E1123 08:26:12.500807 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.738295 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp"] Nov 23 08:26:13 crc kubenswrapper[4988]: E1123 08:26:13.739588 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504adff5-b08d-4279-8853-26c0b3847e79" containerName="pull" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.739613 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="504adff5-b08d-4279-8853-26c0b3847e79" containerName="pull" Nov 23 08:26:13 crc kubenswrapper[4988]: E1123 08:26:13.739639 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504adff5-b08d-4279-8853-26c0b3847e79" containerName="extract" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.739645 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="504adff5-b08d-4279-8853-26c0b3847e79" containerName="extract" Nov 23 08:26:13 crc kubenswrapper[4988]: E1123 08:26:13.739679 4988 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="504adff5-b08d-4279-8853-26c0b3847e79" containerName="util" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.739686 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="504adff5-b08d-4279-8853-26c0b3847e79" containerName="util" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.741031 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="504adff5-b08d-4279-8853-26c0b3847e79" containerName="extract" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.742428 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.770556 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.770580 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.771502 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-7sgqz" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.781769 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp"] Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.792663 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfcfn\" (UniqueName: \"kubernetes.io/projected/e2904ef0-72de-43d7-8d49-b1f8e828cd51-kube-api-access-rfcfn\") pod \"obo-prometheus-operator-668cf9dfbb-fqkjp\" (UID: \"e2904ef0-72de-43d7-8d49-b1f8e828cd51\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.863808 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7"] Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.865521 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.867100 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.870410 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-rs4gf" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.883203 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7"] Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.891390 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd"] Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.892615 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.893953 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfcfn\" (UniqueName: \"kubernetes.io/projected/e2904ef0-72de-43d7-8d49-b1f8e828cd51-kube-api-access-rfcfn\") pod \"obo-prometheus-operator-668cf9dfbb-fqkjp\" (UID: \"e2904ef0-72de-43d7-8d49-b1f8e828cd51\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.911944 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd"] Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.916117 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfcfn\" (UniqueName: \"kubernetes.io/projected/e2904ef0-72de-43d7-8d49-b1f8e828cd51-kube-api-access-rfcfn\") pod \"obo-prometheus-operator-668cf9dfbb-fqkjp\" (UID: \"e2904ef0-72de-43d7-8d49-b1f8e828cd51\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.996652 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56cbbfcd-be09-45ab-97f1-eb8003e190d7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7\" (UID: \"56cbbfcd-be09-45ab-97f1-eb8003e190d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.996930 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdef66a9-8c6f-4b29-a47b-f46a4694696b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd\" (UID: \"bdef66a9-8c6f-4b29-a47b-f46a4694696b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.997074 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56cbbfcd-be09-45ab-97f1-eb8003e190d7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7\" (UID: \"56cbbfcd-be09-45ab-97f1-eb8003e190d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" Nov 23 08:26:13 crc kubenswrapper[4988]: I1123 08:26:13.997202 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdef66a9-8c6f-4b29-a47b-f46a4694696b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd\" (UID: \"bdef66a9-8c6f-4b29-a47b-f46a4694696b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.066564 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-dlwlq"] Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.067942 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.073043 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.073249 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-68r5b" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.106771 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.107370 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdef66a9-8c6f-4b29-a47b-f46a4694696b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd\" (UID: \"bdef66a9-8c6f-4b29-a47b-f46a4694696b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.107709 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56cbbfcd-be09-45ab-97f1-eb8003e190d7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7\" (UID: \"56cbbfcd-be09-45ab-97f1-eb8003e190d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.107774 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdef66a9-8c6f-4b29-a47b-f46a4694696b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd\" (UID: \"bdef66a9-8c6f-4b29-a47b-f46a4694696b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.107858 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56cbbfcd-be09-45ab-97f1-eb8003e190d7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7\" (UID: \"56cbbfcd-be09-45ab-97f1-eb8003e190d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.118490 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bdef66a9-8c6f-4b29-a47b-f46a4694696b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd\" (UID: \"bdef66a9-8c6f-4b29-a47b-f46a4694696b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.119438 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bdef66a9-8c6f-4b29-a47b-f46a4694696b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd\" (UID: \"bdef66a9-8c6f-4b29-a47b-f46a4694696b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.121051 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/observability-operator-d8bb48f5d-dlwlq"] Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.122778 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/56cbbfcd-be09-45ab-97f1-eb8003e190d7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7\" (UID: \"56cbbfcd-be09-45ab-97f1-eb8003e190d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.137015 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/56cbbfcd-be09-45ab-97f1-eb8003e190d7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7\" (UID: \"56cbbfcd-be09-45ab-97f1-eb8003e190d7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.196707 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.209155 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwcq2\" (UniqueName: \"kubernetes.io/projected/5e20fbfc-63e7-4be1-92aa-f4f8d724b112-kube-api-access-lwcq2\") pod \"observability-operator-d8bb48f5d-dlwlq\" (UID: \"5e20fbfc-63e7-4be1-92aa-f4f8d724b112\") " pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.209901 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e20fbfc-63e7-4be1-92aa-f4f8d724b112-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-dlwlq\" (UID: \"5e20fbfc-63e7-4be1-92aa-f4f8d724b112\") " pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.219744 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.298326 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qgshk"] Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.299697 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.302529 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-cbft5" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.311536 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e20fbfc-63e7-4be1-92aa-f4f8d724b112-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-dlwlq\" (UID: \"5e20fbfc-63e7-4be1-92aa-f4f8d724b112\") " pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.311664 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwcq2\" (UniqueName: \"kubernetes.io/projected/5e20fbfc-63e7-4be1-92aa-f4f8d724b112-kube-api-access-lwcq2\") pod \"observability-operator-d8bb48f5d-dlwlq\" (UID: \"5e20fbfc-63e7-4be1-92aa-f4f8d724b112\") " pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.322870 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e20fbfc-63e7-4be1-92aa-f4f8d724b112-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-dlwlq\" (UID: \"5e20fbfc-63e7-4be1-92aa-f4f8d724b112\") " pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.342626 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwcq2\" (UniqueName: \"kubernetes.io/projected/5e20fbfc-63e7-4be1-92aa-f4f8d724b112-kube-api-access-lwcq2\") pod \"observability-operator-d8bb48f5d-dlwlq\" (UID: \"5e20fbfc-63e7-4be1-92aa-f4f8d724b112\") " pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.354569 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qgshk"] Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.404655 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.414600 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/cebba8f4-982d-470a-bef5-05e27811a64b-openshift-service-ca\") pod \"perses-operator-5446b9c989-qgshk\" (UID: \"cebba8f4-982d-470a-bef5-05e27811a64b\") " pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.414712 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twr4\" (UniqueName: \"kubernetes.io/projected/cebba8f4-982d-470a-bef5-05e27811a64b-kube-api-access-5twr4\") pod \"perses-operator-5446b9c989-qgshk\" (UID: \"cebba8f4-982d-470a-bef5-05e27811a64b\") " pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.516801 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5twr4\" (UniqueName: \"kubernetes.io/projected/cebba8f4-982d-470a-bef5-05e27811a64b-kube-api-access-5twr4\") pod \"perses-operator-5446b9c989-qgshk\" (UID: \"cebba8f4-982d-470a-bef5-05e27811a64b\") " pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.517624 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/cebba8f4-982d-470a-bef5-05e27811a64b-openshift-service-ca\") pod \"perses-operator-5446b9c989-qgshk\" (UID: \"cebba8f4-982d-470a-bef5-05e27811a64b\") " pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.519284 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/cebba8f4-982d-470a-bef5-05e27811a64b-openshift-service-ca\") pod \"perses-operator-5446b9c989-qgshk\" (UID: \"cebba8f4-982d-470a-bef5-05e27811a64b\") " pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.541172 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5twr4\" (UniqueName: \"kubernetes.io/projected/cebba8f4-982d-470a-bef5-05e27811a64b-kube-api-access-5twr4\") pod \"perses-operator-5446b9c989-qgshk\" (UID: \"cebba8f4-982d-470a-bef5-05e27811a64b\") " pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.581133 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp"] Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.671814 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.850498 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7"] Nov 23 08:26:14 crc kubenswrapper[4988]: W1123 08:26:14.858344 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56cbbfcd_be09_45ab_97f1_eb8003e190d7.slice/crio-a6d369345aed622fbfac029ccbfc2131e8f5131919e3fbc44987d7f72b930e6b WatchSource:0}: Error finding container a6d369345aed622fbfac029ccbfc2131e8f5131919e3fbc44987d7f72b930e6b: Status 404 returned error can't find the container with id a6d369345aed622fbfac029ccbfc2131e8f5131919e3fbc44987d7f72b930e6b Nov 23 08:26:14 crc kubenswrapper[4988]: W1123 08:26:14.889036 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdef66a9_8c6f_4b29_a47b_f46a4694696b.slice/crio-d91a6aad9235da5aaf5ae9c941832367f0da63251525a2274311343335551fa8 WatchSource:0}: Error finding container d91a6aad9235da5aaf5ae9c941832367f0da63251525a2274311343335551fa8: Status 404 returned error can't find the container with id d91a6aad9235da5aaf5ae9c941832367f0da63251525a2274311343335551fa8 Nov 23 08:26:14 crc kubenswrapper[4988]: I1123 08:26:14.891144 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd"] Nov 23 08:26:15 crc kubenswrapper[4988]: I1123 08:26:15.011781 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-dlwlq"] Nov 23 08:26:15 crc kubenswrapper[4988]: W1123 08:26:15.268412 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcebba8f4_982d_470a_bef5_05e27811a64b.slice/crio-a4c326bb8e45c796f75f5f83ce5e46c52119f211c9f0707933de5d291e017c21 WatchSource:0}: Error finding container a4c326bb8e45c796f75f5f83ce5e46c52119f211c9f0707933de5d291e017c21: Status 404 returned error can't find the container with id a4c326bb8e45c796f75f5f83ce5e46c52119f211c9f0707933de5d291e017c21 Nov 23 08:26:15 crc kubenswrapper[4988]: I1123 08:26:15.269096 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qgshk"] Nov 23 08:26:15 crc kubenswrapper[4988]: I1123 08:26:15.606013 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp" event={"ID":"e2904ef0-72de-43d7-8d49-b1f8e828cd51","Type":"ContainerStarted","Data":"edc8eee0903833013a86065f6269ee7885cfe66e1ccf4b1b9f273c9ab71ebfa2"} Nov 23 08:26:15 crc kubenswrapper[4988]: I1123 08:26:15.623491 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" event={"ID":"bdef66a9-8c6f-4b29-a47b-f46a4694696b","Type":"ContainerStarted","Data":"d91a6aad9235da5aaf5ae9c941832367f0da63251525a2274311343335551fa8"} Nov 23 08:26:15 crc kubenswrapper[4988]: I1123 08:26:15.625128 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" event={"ID":"5e20fbfc-63e7-4be1-92aa-f4f8d724b112","Type":"ContainerStarted","Data":"2eef1262be729680fbb5c63fef1eb133ccaa753e7d710786d32b680e7dd87c2b"} Nov 23 08:26:15 crc kubenswrapper[4988]: I1123 
08:26:15.626765 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-qgshk" event={"ID":"cebba8f4-982d-470a-bef5-05e27811a64b","Type":"ContainerStarted","Data":"a4c326bb8e45c796f75f5f83ce5e46c52119f211c9f0707933de5d291e017c21"} Nov 23 08:26:15 crc kubenswrapper[4988]: I1123 08:26:15.628466 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" event={"ID":"56cbbfcd-be09-45ab-97f1-eb8003e190d7","Type":"ContainerStarted","Data":"a6d369345aed622fbfac029ccbfc2131e8f5131919e3fbc44987d7f72b930e6b"} Nov 23 08:26:19 crc kubenswrapper[4988]: I1123 08:26:19.082412 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pm8zt" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="registry-server" probeResult="failure" output=< Nov 23 08:26:19 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:26:19 crc kubenswrapper[4988]: > Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.497452 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:26:23 crc kubenswrapper[4988]: E1123 08:26:23.498393 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.746919 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" event={"ID":"5e20fbfc-63e7-4be1-92aa-f4f8d724b112","Type":"ContainerStarted","Data":"c1e4666b0dec398772bf57876ae3f4ba1c70288f884f2be991c1229db616dd2a"} Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.747404 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.748411 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-qgshk" event={"ID":"cebba8f4-982d-470a-bef5-05e27811a64b","Type":"ContainerStarted","Data":"1113a57424f472e4e59828452f603c27e8ba25cc90897e67bdeebd453883235b"} Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.749180 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.751421 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" event={"ID":"56cbbfcd-be09-45ab-97f1-eb8003e190d7","Type":"ContainerStarted","Data":"5771cbfa9a1e6641e8d26cf0887c88889ccf8cb3224591bee5a0e319963d7bd6"} Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.753065 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp" event={"ID":"e2904ef0-72de-43d7-8d49-b1f8e828cd51","Type":"ContainerStarted","Data":"ea8559d573bf455aac354f6d14c75dacb0c3a02ad918865ecf48043b666af944"} Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.753255 4988 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.754617 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" event={"ID":"bdef66a9-8c6f-4b29-a47b-f46a4694696b","Type":"ContainerStarted","Data":"e62633fb8166bcef9c94ac943c5a3eea5b5e82508f6a21282e5bbb65381ae7a4"} Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.783492 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-dlwlq" podStartSLOduration=2.081113419 podStartE2EDuration="9.783476634s" podCreationTimestamp="2025-11-23 08:26:14 +0000 UTC" firstStartedPulling="2025-11-23 08:26:15.02779283 +0000 UTC m=+6027.336305603" lastFinishedPulling="2025-11-23 08:26:22.730156055 +0000 UTC m=+6035.038668818" observedRunningTime="2025-11-23 08:26:23.778069472 +0000 UTC m=+6036.086582245" watchObservedRunningTime="2025-11-23 08:26:23.783476634 +0000 UTC m=+6036.091989397" Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.812736 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7" podStartSLOduration=3.099661111 podStartE2EDuration="10.812710587s" podCreationTimestamp="2025-11-23 08:26:13 +0000 UTC" firstStartedPulling="2025-11-23 08:26:14.860932449 +0000 UTC m=+6027.169445212" lastFinishedPulling="2025-11-23 08:26:22.573981925 +0000 UTC m=+6034.882494688" observedRunningTime="2025-11-23 08:26:23.806062325 +0000 UTC m=+6036.114575088" watchObservedRunningTime="2025-11-23 08:26:23.812710587 +0000 UTC m=+6036.121223350" Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.867227 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd" podStartSLOduration=3.184790609 podStartE2EDuration="10.867178986s" podCreationTimestamp="2025-11-23 08:26:13 +0000 UTC" firstStartedPulling="2025-11-23 08:26:14.892429168 +0000 UTC m=+6027.200941931" lastFinishedPulling="2025-11-23 08:26:22.574817555 +0000 UTC m=+6034.883330308" observedRunningTime="2025-11-23 08:26:23.860036842 +0000 UTC m=+6036.168549615" watchObservedRunningTime="2025-11-23 08:26:23.867178986 +0000 UTC m=+6036.175691749" Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.879946 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fqkjp" podStartSLOduration=2.898015042 podStartE2EDuration="10.879929657s" podCreationTimestamp="2025-11-23 08:26:13 +0000 UTC" firstStartedPulling="2025-11-23 08:26:14.592039929 +0000 UTC m=+6026.900552692" lastFinishedPulling="2025-11-23 08:26:22.573954554 +0000 UTC m=+6034.882467307" observedRunningTime="2025-11-23 08:26:23.87759148 +0000 UTC m=+6036.186104243" watchObservedRunningTime="2025-11-23 08:26:23.879929657 +0000 UTC m=+6036.188442420" Nov 23 08:26:23 crc kubenswrapper[4988]: I1123 08:26:23.895216 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-qgshk" podStartSLOduration=2.509330887 podStartE2EDuration="9.895180359s" podCreationTimestamp="2025-11-23 08:26:14 +0000 UTC" firstStartedPulling="2025-11-23 08:26:15.278278952 +0000 UTC m=+6027.586791715" lastFinishedPulling="2025-11-23 08:26:22.664128404 +0000 UTC 
m=+6034.972641187" observedRunningTime="2025-11-23 08:26:23.891616402 +0000 UTC m=+6036.200129165" watchObservedRunningTime="2025-11-23 08:26:23.895180359 +0000 UTC m=+6036.203693122" Nov 23 08:26:29 crc kubenswrapper[4988]: I1123 08:26:29.084080 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pm8zt" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="registry-server" probeResult="failure" output=< Nov 23 08:26:29 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:26:29 crc kubenswrapper[4988]: > Nov 23 08:26:34 crc kubenswrapper[4988]: I1123 08:26:34.675557 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-qgshk" Nov 23 08:26:35 crc kubenswrapper[4988]: I1123 08:26:35.496854 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:26:35 crc kubenswrapper[4988]: E1123 08:26:35.497147 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.373061 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.373542 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="5adb072b-636b-4019-8dfe-44f8e8b27439" containerName="openstackclient" containerID="cri-o://76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045" gracePeriod=2 Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.417348 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.487300 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 08:26:37 crc kubenswrapper[4988]: E1123 08:26:37.487743 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5adb072b-636b-4019-8dfe-44f8e8b27439" containerName="openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.487765 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5adb072b-636b-4019-8dfe-44f8e8b27439" containerName="openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.488019 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5adb072b-636b-4019-8dfe-44f8e8b27439" containerName="openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.488738 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.499368 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.519223 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bxf9\" (UniqueName: \"kubernetes.io/projected/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-kube-api-access-7bxf9\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.519301 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.519360 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config-secret\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.519407 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.574248 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 23 08:26:37 crc kubenswrapper[4988]: E1123 08:26:37.575223 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-7bxf9 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.576352 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.586946 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.588264 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.619592 4988 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5adb072b-636b-4019-8dfe-44f8e8b27439" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.620770 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szfvn\" (UniqueName: \"kubernetes.io/projected/ba033bc0-567a-4098-bb94-500843c4f18b-kube-api-access-szfvn\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.620819 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bxf9\" (UniqueName: \"kubernetes.io/projected/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-kube-api-access-7bxf9\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.620849 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.620891 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config-secret\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.620916 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.620941 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.620967 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.621015 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config-secret\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.639261 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 
08:26:37 crc kubenswrapper[4988]: E1123 08:26:37.644165 4988 projected.go:194] Error preparing data for projected volume kube-api-access-7bxf9 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f) does not match the UID in record. The object might have been deleted and then recreated Nov 23 08:26:37 crc kubenswrapper[4988]: E1123 08:26:37.644268 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-kube-api-access-7bxf9 podName:7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f nodeName:}" failed. No retries permitted until 2025-11-23 08:26:38.144247553 +0000 UTC m=+6050.452760316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7bxf9" (UniqueName: "kubernetes.io/projected/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-kube-api-access-7bxf9") pod "openstackclient" (UID: "7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f) does not match the UID in record. The object might have been deleted and then recreated Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.645153 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.648930 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.681064 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config-secret\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.697304 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.698579 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.718051 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-c7swt" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.724338 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.724393 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.724443 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkqz9\" (UniqueName: \"kubernetes.io/projected/1a64aea4-b7f0-4a4f-971c-3312892fe956-kube-api-access-dkqz9\") pod \"kube-state-metrics-0\" (UID: \"1a64aea4-b7f0-4a4f-971c-3312892fe956\") " pod="openstack/kube-state-metrics-0" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.724467 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config-secret\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.724531 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szfvn\" (UniqueName: \"kubernetes.io/projected/ba033bc0-567a-4098-bb94-500843c4f18b-kube-api-access-szfvn\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.735647 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.737937 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.744856 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.745284 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szfvn\" (UniqueName: \"kubernetes.io/projected/ba033bc0-567a-4098-bb94-500843c4f18b-kube-api-access-szfvn\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.783523 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config-secret\") pod \"openstackclient\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " pod="openstack/openstackclient" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.829464 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkqz9\" (UniqueName: \"kubernetes.io/projected/1a64aea4-b7f0-4a4f-971c-3312892fe956-kube-api-access-dkqz9\") pod \"kube-state-metrics-0\" (UID: \"1a64aea4-b7f0-4a4f-971c-3312892fe956\") " pod="openstack/kube-state-metrics-0" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.898845 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkqz9\" (UniqueName: \"kubernetes.io/projected/1a64aea4-b7f0-4a4f-971c-3312892fe956-kube-api-access-dkqz9\") pod \"kube-state-metrics-0\" (UID: \"1a64aea4-b7f0-4a4f-971c-3312892fe956\") " pod="openstack/kube-state-metrics-0" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.927750 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 08:26:37 crc kubenswrapper[4988]: I1123 08:26:37.947246 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.033691 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.040506 4988 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.070306 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.152409 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bxf9\" (UniqueName: \"kubernetes.io/projected/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-kube-api-access-7bxf9\") pod \"openstackclient\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " pod="openstack/openstackclient" Nov 23 08:26:38 crc kubenswrapper[4988]: E1123 08:26:38.154893 4988 projected.go:194] Error preparing data for projected volume kube-api-access-7bxf9 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f) does not match the UID in record. The object might have been deleted and then recreated Nov 23 08:26:38 crc kubenswrapper[4988]: E1123 08:26:38.155073 4988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-kube-api-access-7bxf9 podName:7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f nodeName:}" failed. No retries permitted until 2025-11-23 08:26:39.155056376 +0000 UTC m=+6051.463569139 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7bxf9" (UniqueName: "kubernetes.io/projected/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-kube-api-access-7bxf9") pod "openstackclient" (UID: "7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f) does not match the UID in record. The object might have been deleted and then recreated Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.255240 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config\") pod \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.255309 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-combined-ca-bundle\") pod \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.255910 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config-secret\") pod \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\" (UID: \"7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f\") " Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.256401 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bxf9\" (UniqueName: \"kubernetes.io/projected/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-kube-api-access-7bxf9\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.263602 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f" (UID: "7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.275009 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f" (UID: "7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.275058 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f" (UID: "7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.358379 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.358641 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.358650 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.532849 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f" path="/var/lib/kubelet/pods/7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f/volumes" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.539947 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.548723 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.551569 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.551626 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.551894 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.552504 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.552529 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-nr89j" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.584082 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.667377 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45368a7a-7b66-4d55-a8a7-a306d69e6858-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.667423 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45368a7a-7b66-4d55-a8a7-a306d69e6858-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.667459 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/45368a7a-7b66-4d55-a8a7-a306d69e6858-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.667568 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/45368a7a-7b66-4d55-a8a7-a306d69e6858-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.667601 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/45368a7a-7b66-4d55-a8a7-a306d69e6858-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.667653 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtrhs\" (UniqueName: \"kubernetes.io/projected/45368a7a-7b66-4d55-a8a7-a306d69e6858-kube-api-access-vtrhs\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.667676 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45368a7a-7b66-4d55-a8a7-a306d69e6858-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.771412 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45368a7a-7b66-4d55-a8a7-a306d69e6858-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.771471 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45368a7a-7b66-4d55-a8a7-a306d69e6858-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.771505 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/45368a7a-7b66-4d55-a8a7-a306d69e6858-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.771587 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/45368a7a-7b66-4d55-a8a7-a306d69e6858-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.771613 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/45368a7a-7b66-4d55-a8a7-a306d69e6858-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.771651 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtrhs\" (UniqueName: \"kubernetes.io/projected/45368a7a-7b66-4d55-a8a7-a306d69e6858-kube-api-access-vtrhs\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.771671 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45368a7a-7b66-4d55-a8a7-a306d69e6858-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.776638 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/45368a7a-7b66-4d55-a8a7-a306d69e6858-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.786140 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/45368a7a-7b66-4d55-a8a7-a306d69e6858-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.787810 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45368a7a-7b66-4d55-a8a7-a306d69e6858-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.787940 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45368a7a-7b66-4d55-a8a7-a306d69e6858-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.788277 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45368a7a-7b66-4d55-a8a7-a306d69e6858-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.793350 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/45368a7a-7b66-4d55-a8a7-a306d69e6858-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.806120 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vtrhs\" (UniqueName: \"kubernetes.io/projected/45368a7a-7b66-4d55-a8a7-a306d69e6858-kube-api-access-vtrhs\") pod \"alertmanager-metric-storage-0\" (UID: \"45368a7a-7b66-4d55-a8a7-a306d69e6858\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.907725 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.908785 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 23 08:26:38 crc kubenswrapper[4988]: I1123 08:26:38.977299 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.091639 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pm8zt" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="registry-server" probeResult="failure" output=< Nov 23 08:26:39 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:26:39 crc kubenswrapper[4988]: > Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.101546 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1a64aea4-b7f0-4a4f-971c-3312892fe956","Type":"ContainerStarted","Data":"368b597d77323f807feee6c0f725fbd4aa28e65bf95ce8dbd9001b6785237233"} Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.126150 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.126245 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ba033bc0-567a-4098-bb94-500843c4f18b","Type":"ContainerStarted","Data":"397dad5b7f39389a4a53dddf8f9ac5ce800419600ddb296588b51f5ef0e9dead"} Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.165660 4988 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7f929ef8-b1ec-4a9a-9a24-34b1bb02b45f" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.205422 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.207871 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.232132 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.232321 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.232595 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.240483 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.240647 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.240764 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7tdtl" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.287998 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.288050 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0330452d-55b6-4b5b-98a9-be1342e2f47a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.288106 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.288146 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0330452d-55b6-4b5b-98a9-be1342e2f47a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.288226 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-config\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.288248 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.288276 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp28b\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-kube-api-access-qp28b\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.288327 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.354256 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.391371 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0330452d-55b6-4b5b-98a9-be1342e2f47a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.391448 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.391483 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0330452d-55b6-4b5b-98a9-be1342e2f47a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.391529 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-config\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.391544 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.391574 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp28b\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-kube-api-access-qp28b\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.391624 
4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.391676 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.408554 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0330452d-55b6-4b5b-98a9-be1342e2f47a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.416852 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.422240 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0330452d-55b6-4b5b-98a9-be1342e2f47a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.447861 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.451910 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-config\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.453864 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.465920 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp28b\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-kube-api-access-qp28b\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 
08:26:39.515251 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.515285 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d4023467469c5ca15534b89f41b46ca542b0822c9f59a71a8a2710844d21373/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.737343 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.858811 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") pod \"prometheus-metric-storage-0\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.883353 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.893511 4988 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5adb072b-636b-4019-8dfe-44f8e8b27439" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.907533 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config\") pod \"5adb072b-636b-4019-8dfe-44f8e8b27439\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.907652 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-combined-ca-bundle\") pod \"5adb072b-636b-4019-8dfe-44f8e8b27439\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.907699 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config-secret\") pod \"5adb072b-636b-4019-8dfe-44f8e8b27439\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.907735 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7fnv\" (UniqueName: \"kubernetes.io/projected/5adb072b-636b-4019-8dfe-44f8e8b27439-kube-api-access-f7fnv\") pod \"5adb072b-636b-4019-8dfe-44f8e8b27439\" (UID: \"5adb072b-636b-4019-8dfe-44f8e8b27439\") " Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.913441 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5adb072b-636b-4019-8dfe-44f8e8b27439-kube-api-access-f7fnv" (OuterVolumeSpecName: "kube-api-access-f7fnv") pod 
"5adb072b-636b-4019-8dfe-44f8e8b27439" (UID: "5adb072b-636b-4019-8dfe-44f8e8b27439"). InnerVolumeSpecName "kube-api-access-f7fnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:26:39 crc kubenswrapper[4988]: I1123 08:26:39.925714 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.010497 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7fnv\" (UniqueName: \"kubernetes.io/projected/5adb072b-636b-4019-8dfe-44f8e8b27439-kube-api-access-f7fnv\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.076873 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5adb072b-636b-4019-8dfe-44f8e8b27439" (UID: "5adb072b-636b-4019-8dfe-44f8e8b27439"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.096049 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5adb072b-636b-4019-8dfe-44f8e8b27439" (UID: "5adb072b-636b-4019-8dfe-44f8e8b27439"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.113465 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.113498 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.125368 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5adb072b-636b-4019-8dfe-44f8e8b27439" (UID: "5adb072b-636b-4019-8dfe-44f8e8b27439"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.140566 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45368a7a-7b66-4d55-a8a7-a306d69e6858","Type":"ContainerStarted","Data":"dcf7a3c7d2c1958f8905470536f7dd9afcddd0014303cb6d7280a5d39f3f4277"} Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.170750 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ba033bc0-567a-4098-bb94-500843c4f18b","Type":"ContainerStarted","Data":"8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded"} Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.173157 4988 generic.go:334] "Generic (PLEG): container finished" podID="5adb072b-636b-4019-8dfe-44f8e8b27439" containerID="76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045" exitCode=137 Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.173259 4988 scope.go:117] "RemoveContainer" containerID="76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.173329 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.179818 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.192605 4988 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5adb072b-636b-4019-8dfe-44f8e8b27439" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.192762 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.192746793 podStartE2EDuration="3.192746793s" podCreationTimestamp="2025-11-23 08:26:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:26:40.186306476 +0000 UTC m=+6052.494819239" watchObservedRunningTime="2025-11-23 08:26:40.192746793 +0000 UTC m=+6052.501259556" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.204261 4988 scope.go:117] "RemoveContainer" containerID="76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045" Nov 23 08:26:40 crc kubenswrapper[4988]: E1123 08:26:40.205835 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045\": container with ID starting with 76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045 not found: ID does not exist" containerID="76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.205898 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045"} err="failed to get container status \"76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045\": rpc error: code = NotFound desc = could not find container \"76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045\": container with ID starting with 76bad2bf07643ad2bd179f1e81f776242877f33e4e8bc5be333dfc2b3219a045 not found: ID does not exist" Nov 23 08:26:40 crc 
kubenswrapper[4988]: I1123 08:26:40.215024 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5adb072b-636b-4019-8dfe-44f8e8b27439-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.222621 4988 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5adb072b-636b-4019-8dfe-44f8e8b27439" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.225026 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.608020517 podStartE2EDuration="3.22500547s" podCreationTimestamp="2025-11-23 08:26:37 +0000 UTC" firstStartedPulling="2025-11-23 08:26:38.930741922 +0000 UTC m=+6051.239254685" lastFinishedPulling="2025-11-23 08:26:39.547726875 +0000 UTC m=+6051.856239638" observedRunningTime="2025-11-23 08:26:40.213583421 +0000 UTC m=+6052.522096184" watchObservedRunningTime="2025-11-23 08:26:40.22500547 +0000 UTC m=+6052.533518233" Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.473692 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:26:40 crc kubenswrapper[4988]: W1123 08:26:40.485279 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0330452d_55b6_4b5b_98a9_be1342e2f47a.slice/crio-77dd68aa61199923765d69d90dfd45db4fbd0309957da5d9d0427f3717f4a189 WatchSource:0}: Error finding container 77dd68aa61199923765d69d90dfd45db4fbd0309957da5d9d0427f3717f4a189: Status 404 returned error can't find the container with id 77dd68aa61199923765d69d90dfd45db4fbd0309957da5d9d0427f3717f4a189 Nov 23 08:26:40 crc kubenswrapper[4988]: I1123 08:26:40.507637 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5adb072b-636b-4019-8dfe-44f8e8b27439" path="/var/lib/kubelet/pods/5adb072b-636b-4019-8dfe-44f8e8b27439/volumes" Nov 23 08:26:41 crc kubenswrapper[4988]: I1123 08:26:41.191150 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1a64aea4-b7f0-4a4f-971c-3312892fe956","Type":"ContainerStarted","Data":"03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84"} Nov 23 08:26:41 crc kubenswrapper[4988]: I1123 08:26:41.192740 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerStarted","Data":"77dd68aa61199923765d69d90dfd45db4fbd0309957da5d9d0427f3717f4a189"} Nov 23 08:26:46 crc kubenswrapper[4988]: I1123 08:26:46.242242 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45368a7a-7b66-4d55-a8a7-a306d69e6858","Type":"ContainerStarted","Data":"26789927fe7442dadebc40aa0217b69afd81163f313c4f0e37af2673911687e4"} Nov 23 08:26:46 crc kubenswrapper[4988]: I1123 08:26:46.245167 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerStarted","Data":"c85d4caa8b61dd3fde87b342d41e1b2b1bb6dd16d19d5f8964f001a88669c029"} Nov 23 08:26:46 crc kubenswrapper[4988]: I1123 08:26:46.497797 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 
08:26:46 crc kubenswrapper[4988]: E1123 08:26:46.498302 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:26:47 crc kubenswrapper[4988]: I1123 08:26:47.933246 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 08:26:48 crc kubenswrapper[4988]: I1123 08:26:48.089673 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:26:48 crc kubenswrapper[4988]: I1123 08:26:48.157104 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:26:48 crc kubenswrapper[4988]: I1123 08:26:48.328990 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pm8zt"] Nov 23 08:26:49 crc kubenswrapper[4988]: I1123 08:26:49.274799 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pm8zt" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="registry-server" containerID="cri-o://66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8" gracePeriod=2 Nov 23 08:26:49 crc kubenswrapper[4988]: I1123 08:26:49.898737 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.026124 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-utilities\") pod \"d678109b-3e4e-441e-aa10-d63b5c435418\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.026411 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-catalog-content\") pod \"d678109b-3e4e-441e-aa10-d63b5c435418\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.026569 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ttj\" (UniqueName: \"kubernetes.io/projected/d678109b-3e4e-441e-aa10-d63b5c435418-kube-api-access-z4ttj\") pod \"d678109b-3e4e-441e-aa10-d63b5c435418\" (UID: \"d678109b-3e4e-441e-aa10-d63b5c435418\") " Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.026924 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-utilities" (OuterVolumeSpecName: "utilities") pod "d678109b-3e4e-441e-aa10-d63b5c435418" (UID: "d678109b-3e4e-441e-aa10-d63b5c435418"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.027323 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.032293 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d678109b-3e4e-441e-aa10-d63b5c435418-kube-api-access-z4ttj" (OuterVolumeSpecName: "kube-api-access-z4ttj") pod "d678109b-3e4e-441e-aa10-d63b5c435418" (UID: "d678109b-3e4e-441e-aa10-d63b5c435418"). InnerVolumeSpecName "kube-api-access-z4ttj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.100370 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d678109b-3e4e-441e-aa10-d63b5c435418" (UID: "d678109b-3e4e-441e-aa10-d63b5c435418"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.129591 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d678109b-3e4e-441e-aa10-d63b5c435418-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.129835 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ttj\" (UniqueName: \"kubernetes.io/projected/d678109b-3e4e-441e-aa10-d63b5c435418-kube-api-access-z4ttj\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.289833 4988 generic.go:334] "Generic (PLEG): container finished" podID="d678109b-3e4e-441e-aa10-d63b5c435418" containerID="66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8" exitCode=0 Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.289882 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm8zt" event={"ID":"d678109b-3e4e-441e-aa10-d63b5c435418","Type":"ContainerDied","Data":"66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8"} Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.289923 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pm8zt" event={"ID":"d678109b-3e4e-441e-aa10-d63b5c435418","Type":"ContainerDied","Data":"018f53fae3e001073147ae2459d823c0340e6689287247aeaac6cef4ea96e276"} Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.289937 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pm8zt" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.289946 4988 scope.go:117] "RemoveContainer" containerID="66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.334179 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pm8zt"] Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.340812 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pm8zt"] Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.343631 4988 scope.go:117] "RemoveContainer" containerID="d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.377445 4988 scope.go:117] "RemoveContainer" containerID="22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.448916 4988 scope.go:117] "RemoveContainer" containerID="66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8" Nov 23 08:26:50 crc kubenswrapper[4988]: E1123 08:26:50.449675 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8\": container with ID starting with 66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8 not found: ID does not exist" containerID="66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.449736 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8"} err="failed to get container status \"66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8\": rpc error: code = NotFound desc = could not find container \"66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8\": container with ID starting with 66e3e7867e7d6d8d99edff6dcf147dcba7846857f5e7ead1a056efbe68f741b8 not found: ID does not exist" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.449776 4988 scope.go:117] "RemoveContainer" containerID="d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe" Nov 23 08:26:50 crc kubenswrapper[4988]: E1123 08:26:50.450245 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe\": container with ID starting with d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe not found: ID does not exist" containerID="d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.450271 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe"} err="failed to get container status \"d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe\": rpc error: code = NotFound desc = could not find container \"d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe\": container with ID starting with d001a5ef327647811ae7a171694b7708be4dfe6c43534e1196c4d7ec1d4066fe not found: ID does not exist" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.450289 4988 scope.go:117] "RemoveContainer" 
containerID="22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747" Nov 23 08:26:50 crc kubenswrapper[4988]: E1123 08:26:50.450688 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747\": container with ID starting with 22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747 not found: ID does not exist" containerID="22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.450716 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747"} err="failed to get container status \"22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747\": rpc error: code = NotFound desc = could not find container \"22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747\": container with ID starting with 22a037826cbba5d5fded6127099379af22774011d421573efc6fb936721b3747 not found: ID does not exist" Nov 23 08:26:50 crc kubenswrapper[4988]: I1123 08:26:50.512266 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" path="/var/lib/kubelet/pods/d678109b-3e4e-441e-aa10-d63b5c435418/volumes" Nov 23 08:26:53 crc kubenswrapper[4988]: I1123 08:26:53.325798 4988 generic.go:334] "Generic (PLEG): container finished" podID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerID="c85d4caa8b61dd3fde87b342d41e1b2b1bb6dd16d19d5f8964f001a88669c029" exitCode=0 Nov 23 08:26:53 crc kubenswrapper[4988]: I1123 08:26:53.325892 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerDied","Data":"c85d4caa8b61dd3fde87b342d41e1b2b1bb6dd16d19d5f8964f001a88669c029"} Nov 23 08:26:55 crc kubenswrapper[4988]: I1123 08:26:55.359012 4988 generic.go:334] "Generic (PLEG): container finished" podID="45368a7a-7b66-4d55-a8a7-a306d69e6858" containerID="26789927fe7442dadebc40aa0217b69afd81163f313c4f0e37af2673911687e4" exitCode=0 Nov 23 08:26:55 crc kubenswrapper[4988]: I1123 08:26:55.359116 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45368a7a-7b66-4d55-a8a7-a306d69e6858","Type":"ContainerDied","Data":"26789927fe7442dadebc40aa0217b69afd81163f313c4f0e37af2673911687e4"} Nov 23 08:26:59 crc kubenswrapper[4988]: I1123 08:26:59.399565 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerStarted","Data":"4ee132ea100cd08b80becbf5035c6eb44c7f052a5cd5f023d47202400ddf836a"} Nov 23 08:26:59 crc kubenswrapper[4988]: I1123 08:26:59.402705 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45368a7a-7b66-4d55-a8a7-a306d69e6858","Type":"ContainerStarted","Data":"14a18dd7be3d8cdc4c8085b73c800e9ff4d56c4bce14d6ad29d2253c19de62d8"} Nov 23 08:27:01 crc kubenswrapper[4988]: I1123 08:27:01.498084 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:27:01 crc kubenswrapper[4988]: E1123 08:27:01.499059 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:27:03 crc kubenswrapper[4988]: I1123 08:27:03.450631 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerStarted","Data":"61da122dfc1b2dbe83673da316d9d96425d597f6782a5a0664a8f7f4ced3b7d3"} Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.059396 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db01-account-create-vchh9"] Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.071886 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6plwr"] Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.100087 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db01-account-create-vchh9"] Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.119114 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6plwr"] Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.467863 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"45368a7a-7b66-4d55-a8a7-a306d69e6858","Type":"ContainerStarted","Data":"08f9c0fda3253e692e394ab1a5f6c4d458370f7709901f08e4c02d8a295b1e22"} Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.468492 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.472712 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.502401 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=7.769519316 podStartE2EDuration="26.502347974s" podCreationTimestamp="2025-11-23 08:26:38 +0000 UTC" firstStartedPulling="2025-11-23 08:26:39.76185855 +0000 UTC m=+6052.070371313" lastFinishedPulling="2025-11-23 08:26:58.494687208 +0000 UTC m=+6070.803199971" observedRunningTime="2025-11-23 08:27:04.500012627 +0000 UTC m=+6076.808525470" watchObservedRunningTime="2025-11-23 08:27:04.502347974 +0000 UTC m=+6076.810860757" Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.529146 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="443a70aa-cf7b-4a3a-a970-b3de700daf85" path="/var/lib/kubelet/pods/443a70aa-cf7b-4a3a-a970-b3de700daf85/volumes" Nov 23 08:27:04 crc kubenswrapper[4988]: I1123 08:27:04.530182 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4c814f2-f0b8-497b-b2bf-8b699805f073" path="/var/lib/kubelet/pods/f4c814f2-f0b8-497b-b2bf-8b699805f073/volumes" Nov 23 08:27:06 crc kubenswrapper[4988]: I1123 08:27:06.494835 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerStarted","Data":"96a37a2106d36ecc3d7655fcb1d1252c4ee7e1ef79366a7d7146b8cf20a50844"} Nov 23 08:27:06 crc kubenswrapper[4988]: I1123 08:27:06.552919 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" 
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.469231 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xjl8w"]
Nov 23 08:27:09 crc kubenswrapper[4988]: E1123 08:27:09.470107 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="extract-content"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.470131 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="extract-content"
Nov 23 08:27:09 crc kubenswrapper[4988]: E1123 08:27:09.470181 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="extract-utilities"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.470219 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="extract-utilities"
Nov 23 08:27:09 crc kubenswrapper[4988]: E1123 08:27:09.470264 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="registry-server"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.470278 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="registry-server"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.470792 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d678109b-3e4e-441e-aa10-d63b5c435418" containerName="registry-server"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.473639 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.490273 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xjl8w"]
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.585662 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-utilities\") pod \"community-operators-xjl8w\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.585997 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-catalog-content\") pod \"community-operators-xjl8w\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.586181 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6xnd\" (UniqueName: \"kubernetes.io/projected/647e4168-3faf-4598-bbd7-ff3e52e5cc82-kube-api-access-d6xnd\") pod \"community-operators-xjl8w\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.687845 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-catalog-content\") pod \"community-operators-xjl8w\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.687982 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6xnd\" (UniqueName: \"kubernetes.io/projected/647e4168-3faf-4598-bbd7-ff3e52e5cc82-kube-api-access-d6xnd\") pod \"community-operators-xjl8w\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.688107 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-utilities\") pod \"community-operators-xjl8w\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.688416 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-catalog-content\") pod \"community-operators-xjl8w\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.689076 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-utilities\") pod \"community-operators-xjl8w\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.714879 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6xnd\" (UniqueName: \"kubernetes.io/projected/647e4168-3faf-4598-bbd7-ff3e52e5cc82-kube-api-access-d6xnd\") pod \"community-operators-xjl8w\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.852805 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjl8w"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.927158 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.928234 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 23 08:27:09 crc kubenswrapper[4988]: I1123 08:27:09.932989 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 23 08:27:10 crc kubenswrapper[4988]: I1123 08:27:10.416579 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xjl8w"]
Nov 23 08:27:10 crc kubenswrapper[4988]: W1123 08:27:10.418695 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod647e4168_3faf_4598_bbd7_ff3e52e5cc82.slice/crio-61425ac35e0c6a72d073374a11d4d9b9a20792e7a0e686e1e176db472cca3b33 WatchSource:0}: Error finding container 61425ac35e0c6a72d073374a11d4d9b9a20792e7a0e686e1e176db472cca3b33: Status 404 returned error can't find the container with id 61425ac35e0c6a72d073374a11d4d9b9a20792e7a0e686e1e176db472cca3b33
Nov 23 08:27:10 crc kubenswrapper[4988]: I1123 08:27:10.552148 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjl8w" event={"ID":"647e4168-3faf-4598-bbd7-ff3e52e5cc82","Type":"ContainerStarted","Data":"61425ac35e0c6a72d073374a11d4d9b9a20792e7a0e686e1e176db472cca3b33"}
Nov 23 08:27:10 crc kubenswrapper[4988]: I1123 08:27:10.552802 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.762200 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.764238 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.764400 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" containerName="openstackclient" containerID="cri-o://8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded" gracePeriod=2
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.834160 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Nov 23 08:27:11 crc kubenswrapper[4988]: E1123 08:27:11.834625 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" containerName="openstackclient"
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.834648 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" containerName="openstackclient"
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.834889 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" containerName="openstackclient"
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.835588 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.860172 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.861116 4988 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ba033bc0-567a-4098-bb94-500843c4f18b" podUID="a8f050d7-63e6-4f3a-b0a8-f38370327852"
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.936505 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww9ls\" (UniqueName: \"kubernetes.io/projected/a8f050d7-63e6-4f3a-b0a8-f38370327852-kube-api-access-ww9ls\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient"
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.936677 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f050d7-63e6-4f3a-b0a8-f38370327852-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient"
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.936709 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a8f050d7-63e6-4f3a-b0a8-f38370327852-openstack-config\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient"
Nov 23 08:27:11 crc kubenswrapper[4988]: I1123 08:27:11.936817 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a8f050d7-63e6-4f3a-b0a8-f38370327852-openstack-config-secret\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient"
Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.038137 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a8f050d7-63e6-4f3a-b0a8-f38370327852-openstack-config\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient"
Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.038330 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a8f050d7-63e6-4f3a-b0a8-f38370327852-openstack-config-secret\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient"
Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.039035 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a8f050d7-63e6-4f3a-b0a8-f38370327852-openstack-config\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient"
Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.039331 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww9ls\" (UniqueName: \"kubernetes.io/projected/a8f050d7-63e6-4f3a-b0a8-f38370327852-kube-api-access-ww9ls\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient"
(UniqueName: \"kubernetes.io/projected/a8f050d7-63e6-4f3a-b0a8-f38370327852-kube-api-access-ww9ls\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.039475 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f050d7-63e6-4f3a-b0a8-f38370327852-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.046035 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a8f050d7-63e6-4f3a-b0a8-f38370327852-openstack-config-secret\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.047599 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8f050d7-63e6-4f3a-b0a8-f38370327852-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.058337 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww9ls\" (UniqueName: \"kubernetes.io/projected/a8f050d7-63e6-4f3a-b0a8-f38370327852-kube-api-access-ww9ls\") pod \"openstackclient\" (UID: \"a8f050d7-63e6-4f3a-b0a8-f38370327852\") " pod="openstack/openstackclient" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.142618 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.158250 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.177107 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.179769 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.184019 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.184318 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.244370 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-log-httpd\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.244523 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-scripts\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.244574 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-config-data\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.244641 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-run-httpd\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.244670 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.244692 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ssxm\" (UniqueName: \"kubernetes.io/projected/7f097e75-cc9d-4a21-bf36-1320565d2be7-kube-api-access-7ssxm\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.244753 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.354346 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-log-httpd\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.354465 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-scripts\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.354489 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-config-data\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.354544 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-run-httpd\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.354574 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.354598 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ssxm\" (UniqueName: \"kubernetes.io/projected/7f097e75-cc9d-4a21-bf36-1320565d2be7-kube-api-access-7ssxm\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.354633 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.355076 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-run-httpd\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.355530 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-log-httpd\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.358162 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.358826 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-scripts\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.359924 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.363312 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-config-data\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.385113 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ssxm\" (UniqueName: \"kubernetes.io/projected/7f097e75-cc9d-4a21-bf36-1320565d2be7-kube-api-access-7ssxm\") pod \"ceilometer-0\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " pod="openstack/ceilometer-0" Nov 23 08:27:12 crc kubenswrapper[4988]: I1123 08:27:12.506383 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:27:13 crc kubenswrapper[4988]: I1123 08:27:13.027946 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 08:27:13 crc kubenswrapper[4988]: W1123 08:27:13.038030 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8f050d7_63e6_4f3a_b0a8_f38370327852.slice/crio-300941374fa0e13ff2829ee922b3efddf36eb465a596ad09fe196dfc409ad357 WatchSource:0}: Error finding container 300941374fa0e13ff2829ee922b3efddf36eb465a596ad09fe196dfc409ad357: Status 404 returned error can't find the container with id 300941374fa0e13ff2829ee922b3efddf36eb465a596ad09fe196dfc409ad357 Nov 23 08:27:13 crc kubenswrapper[4988]: I1123 08:27:13.133508 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:27:13 crc kubenswrapper[4988]: W1123 08:27:13.137134 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f097e75_cc9d_4a21_bf36_1320565d2be7.slice/crio-f4efe781323d99bdb3b6167886787d86ca013b0b55c8c5a4cd4780ed0723c614 WatchSource:0}: Error finding container f4efe781323d99bdb3b6167886787d86ca013b0b55c8c5a4cd4780ed0723c614: Status 404 returned error can't find the container with id f4efe781323d99bdb3b6167886787d86ca013b0b55c8c5a4cd4780ed0723c614 Nov 23 08:27:13 crc kubenswrapper[4988]: I1123 08:27:13.359621 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:27:13 crc kubenswrapper[4988]: I1123 08:27:13.595196 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjl8w" event={"ID":"647e4168-3faf-4598-bbd7-ff3e52e5cc82","Type":"ContainerStarted","Data":"6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7"} Nov 23 08:27:13 crc kubenswrapper[4988]: I1123 08:27:13.596835 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerStarted","Data":"f4efe781323d99bdb3b6167886787d86ca013b0b55c8c5a4cd4780ed0723c614"} Nov 23 08:27:13 crc kubenswrapper[4988]: I1123 08:27:13.598170 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"a8f050d7-63e6-4f3a-b0a8-f38370327852","Type":"ContainerStarted","Data":"300941374fa0e13ff2829ee922b3efddf36eb465a596ad09fe196dfc409ad357"} Nov 23 08:27:13 crc kubenswrapper[4988]: I1123 08:27:13.598607 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="thanos-sidecar" containerID="cri-o://96a37a2106d36ecc3d7655fcb1d1252c4ee7e1ef79366a7d7146b8cf20a50844" gracePeriod=600 Nov 23 08:27:13 crc kubenswrapper[4988]: I1123 08:27:13.598616 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="prometheus" containerID="cri-o://4ee132ea100cd08b80becbf5035c6eb44c7f052a5cd5f023d47202400ddf836a" gracePeriod=600 Nov 23 08:27:13 crc kubenswrapper[4988]: I1123 08:27:13.598636 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="config-reloader" containerID="cri-o://61da122dfc1b2dbe83673da316d9d96425d597f6782a5a0664a8f7f4ced3b7d3" gracePeriod=600 Nov 23 08:27:13 crc kubenswrapper[4988]: E1123 08:27:13.983599 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod647e4168_3faf_4598_bbd7_ff3e52e5cc82.slice/crio-6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0330452d_55b6_4b5b_98a9_be1342e2f47a.slice/crio-96a37a2106d36ecc3d7655fcb1d1252c4ee7e1ef79366a7d7146b8cf20a50844.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0330452d_55b6_4b5b_98a9_be1342e2f47a.slice/crio-61da122dfc1b2dbe83673da316d9d96425d597f6782a5a0664a8f7f4ced3b7d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0330452d_55b6_4b5b_98a9_be1342e2f47a.slice/crio-conmon-96a37a2106d36ecc3d7655fcb1d1252c4ee7e1ef79366a7d7146b8cf20a50844.scope\": RecentStats: unable to find data in memory cache]" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.385669 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.498280 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:27:14 crc kubenswrapper[4988]: E1123 08:27:14.498589 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.508744 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config-secret\") pod \"ba033bc0-567a-4098-bb94-500843c4f18b\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.508884 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-combined-ca-bundle\") pod \"ba033bc0-567a-4098-bb94-500843c4f18b\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.508949 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szfvn\" (UniqueName: \"kubernetes.io/projected/ba033bc0-567a-4098-bb94-500843c4f18b-kube-api-access-szfvn\") pod \"ba033bc0-567a-4098-bb94-500843c4f18b\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.509110 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config\") pod \"ba033bc0-567a-4098-bb94-500843c4f18b\" (UID: \"ba033bc0-567a-4098-bb94-500843c4f18b\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.525158 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba033bc0-567a-4098-bb94-500843c4f18b-kube-api-access-szfvn" (OuterVolumeSpecName: "kube-api-access-szfvn") pod "ba033bc0-567a-4098-bb94-500843c4f18b" (UID: "ba033bc0-567a-4098-bb94-500843c4f18b"). InnerVolumeSpecName "kube-api-access-szfvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.546759 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ba033bc0-567a-4098-bb94-500843c4f18b" (UID: "ba033bc0-567a-4098-bb94-500843c4f18b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.546867 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba033bc0-567a-4098-bb94-500843c4f18b" (UID: "ba033bc0-567a-4098-bb94-500843c4f18b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.612186 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ba033bc0-567a-4098-bb94-500843c4f18b" (UID: "ba033bc0-567a-4098-bb94-500843c4f18b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.612792 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.612816 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.612825 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba033bc0-567a-4098-bb94-500843c4f18b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.612834 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szfvn\" (UniqueName: \"kubernetes.io/projected/ba033bc0-567a-4098-bb94-500843c4f18b-kube-api-access-szfvn\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.633982 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a8f050d7-63e6-4f3a-b0a8-f38370327852","Type":"ContainerStarted","Data":"8e2504263ffc37ae4af8d005d642703cffc0464e51845608d8de33b26392a244"} Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.642511 4988 generic.go:334] "Generic (PLEG): container finished" podID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerID="96a37a2106d36ecc3d7655fcb1d1252c4ee7e1ef79366a7d7146b8cf20a50844" exitCode=0 Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.642554 4988 generic.go:334] "Generic (PLEG): container finished" podID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerID="61da122dfc1b2dbe83673da316d9d96425d597f6782a5a0664a8f7f4ced3b7d3" exitCode=0 Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.642564 4988 generic.go:334] "Generic (PLEG): container finished" podID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerID="4ee132ea100cd08b80becbf5035c6eb44c7f052a5cd5f023d47202400ddf836a" exitCode=0 Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.642596 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerDied","Data":"96a37a2106d36ecc3d7655fcb1d1252c4ee7e1ef79366a7d7146b8cf20a50844"} Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.642649 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerDied","Data":"61da122dfc1b2dbe83673da316d9d96425d597f6782a5a0664a8f7f4ced3b7d3"} Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.642663 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerDied","Data":"4ee132ea100cd08b80becbf5035c6eb44c7f052a5cd5f023d47202400ddf836a"} Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.644912 4988 generic.go:334] "Generic (PLEG): container finished" podID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerID="6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7" exitCode=0 Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.646357 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjl8w" event={"ID":"647e4168-3faf-4598-bbd7-ff3e52e5cc82","Type":"ContainerDied","Data":"6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7"} Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.648835 4988 generic.go:334] "Generic (PLEG): container finished" podID="ba033bc0-567a-4098-bb94-500843c4f18b" containerID="8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded" exitCode=137 Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.648890 4988 scope.go:117] "RemoveContainer" containerID="8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.649042 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.654009 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.6539885869999997 podStartE2EDuration="3.653988587s" podCreationTimestamp="2025-11-23 08:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:27:14.653343681 +0000 UTC m=+6086.961856444" watchObservedRunningTime="2025-11-23 08:27:14.653988587 +0000 UTC m=+6086.962501350" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.677909 4988 scope.go:117] "RemoveContainer" containerID="8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded" Nov 23 08:27:14 crc kubenswrapper[4988]: E1123 08:27:14.684226 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded\": container with ID starting with 8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded not found: ID does not exist" containerID="8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.684274 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded"} err="failed to get container status \"8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded\": rpc error: code = NotFound desc = could not find container \"8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded\": container with ID starting with 8f2660d87f45fb76b7b53e8e29ffd845fd78de70660244cdd8a1a6d7af761ded not found: ID does not exist" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.810691 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.925389 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp28b\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-kube-api-access-qp28b\") pod \"0330452d-55b6-4b5b-98a9-be1342e2f47a\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.925450 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-config\") pod \"0330452d-55b6-4b5b-98a9-be1342e2f47a\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.925616 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") pod \"0330452d-55b6-4b5b-98a9-be1342e2f47a\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.925698 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-tls-assets\") pod \"0330452d-55b6-4b5b-98a9-be1342e2f47a\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.925728 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-thanos-prometheus-http-client-file\") pod \"0330452d-55b6-4b5b-98a9-be1342e2f47a\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.925753 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0330452d-55b6-4b5b-98a9-be1342e2f47a-config-out\") pod \"0330452d-55b6-4b5b-98a9-be1342e2f47a\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.925780 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-web-config\") pod \"0330452d-55b6-4b5b-98a9-be1342e2f47a\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.925821 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0330452d-55b6-4b5b-98a9-be1342e2f47a-prometheus-metric-storage-rulefiles-0\") pod \"0330452d-55b6-4b5b-98a9-be1342e2f47a\" (UID: \"0330452d-55b6-4b5b-98a9-be1342e2f47a\") " Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.926712 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0330452d-55b6-4b5b-98a9-be1342e2f47a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "0330452d-55b6-4b5b-98a9-be1342e2f47a" (UID: "0330452d-55b6-4b5b-98a9-be1342e2f47a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.931820 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "0330452d-55b6-4b5b-98a9-be1342e2f47a" (UID: "0330452d-55b6-4b5b-98a9-be1342e2f47a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.932740 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-kube-api-access-qp28b" (OuterVolumeSpecName: "kube-api-access-qp28b") pod "0330452d-55b6-4b5b-98a9-be1342e2f47a" (UID: "0330452d-55b6-4b5b-98a9-be1342e2f47a"). InnerVolumeSpecName "kube-api-access-qp28b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.933044 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0330452d-55b6-4b5b-98a9-be1342e2f47a-config-out" (OuterVolumeSpecName: "config-out") pod "0330452d-55b6-4b5b-98a9-be1342e2f47a" (UID: "0330452d-55b6-4b5b-98a9-be1342e2f47a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.934602 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "0330452d-55b6-4b5b-98a9-be1342e2f47a" (UID: "0330452d-55b6-4b5b-98a9-be1342e2f47a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.940211 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-config" (OuterVolumeSpecName: "config") pod "0330452d-55b6-4b5b-98a9-be1342e2f47a" (UID: "0330452d-55b6-4b5b-98a9-be1342e2f47a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.962848 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "0330452d-55b6-4b5b-98a9-be1342e2f47a" (UID: "0330452d-55b6-4b5b-98a9-be1342e2f47a"). InnerVolumeSpecName "pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 08:27:14 crc kubenswrapper[4988]: I1123 08:27:14.963043 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-web-config" (OuterVolumeSpecName: "web-config") pod "0330452d-55b6-4b5b-98a9-be1342e2f47a" (UID: "0330452d-55b6-4b5b-98a9-be1342e2f47a"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.028769 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp28b\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-kube-api-access-qp28b\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.028804 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.028842 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") on node \"crc\" " Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.028854 4988 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0330452d-55b6-4b5b-98a9-be1342e2f47a-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.028865 4988 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0330452d-55b6-4b5b-98a9-be1342e2f47a-config-out\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.028874 4988 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.028886 4988 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0330452d-55b6-4b5b-98a9-be1342e2f47a-web-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.028895 4988 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0330452d-55b6-4b5b-98a9-be1342e2f47a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.073416 4988 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.073636 4988 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233") on node "crc" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.131497 4988 reconciler_common.go:293] "Volume detached for volume \"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.665348 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.668520 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0330452d-55b6-4b5b-98a9-be1342e2f47a","Type":"ContainerDied","Data":"77dd68aa61199923765d69d90dfd45db4fbd0309957da5d9d0427f3717f4a189"} Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.668572 4988 scope.go:117] "RemoveContainer" containerID="96a37a2106d36ecc3d7655fcb1d1252c4ee7e1ef79366a7d7146b8cf20a50844" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.671390 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjl8w" event={"ID":"647e4168-3faf-4598-bbd7-ff3e52e5cc82","Type":"ContainerStarted","Data":"38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6"} Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.715117 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.729878 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.752364 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:27:15 crc kubenswrapper[4988]: E1123 08:27:15.754559 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="thanos-sidecar" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.754576 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="thanos-sidecar" Nov 23 08:27:15 crc kubenswrapper[4988]: E1123 08:27:15.754599 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="config-reloader" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.754606 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="config-reloader" Nov 23 08:27:15 crc kubenswrapper[4988]: E1123 08:27:15.754621 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="prometheus" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.754628 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="prometheus" Nov 23 08:27:15 crc kubenswrapper[4988]: E1123 08:27:15.754644 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="init-config-reloader" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.754650 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="init-config-reloader" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.754824 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="config-reloader" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.754841 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" containerName="prometheus" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.754852 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" 
containerName="thanos-sidecar" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.758478 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.760597 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.760741 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.760943 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.762729 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.763879 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7tdtl" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.764667 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.764700 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.770661 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.848570 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.848655 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/440c2cc5-deab-44af-b561-072f18b90f23-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.848699 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.848744 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vscm2\" (UniqueName: \"kubernetes.io/projected/440c2cc5-deab-44af-b561-072f18b90f23-kube-api-access-vscm2\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.848791 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.848817 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/440c2cc5-deab-44af-b561-072f18b90f23-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.848887 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-config\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.848952 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/440c2cc5-deab-44af-b561-072f18b90f23-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.849066 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.849110 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.849144 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951439 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscm2\" (UniqueName: \"kubernetes.io/projected/440c2cc5-deab-44af-b561-072f18b90f23-kube-api-access-vscm2\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951509 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951542 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/440c2cc5-deab-44af-b561-072f18b90f23-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951606 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-config\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951657 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/440c2cc5-deab-44af-b561-072f18b90f23-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951762 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951794 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951833 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951884 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951945 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/440c2cc5-deab-44af-b561-072f18b90f23-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 
23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.951972 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.952667 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/440c2cc5-deab-44af-b561-072f18b90f23-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.955711 4988 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.955759 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d4023467469c5ca15534b89f41b46ca542b0822c9f59a71a8a2710844d21373/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.956562 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.958502 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/440c2cc5-deab-44af-b561-072f18b90f23-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.959344 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.959636 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.960219 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-web-config\") pod \"prometheus-metric-storage-0\" 
(UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.961644 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.966034 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/440c2cc5-deab-44af-b561-072f18b90f23-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.970213 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/440c2cc5-deab-44af-b561-072f18b90f23-config\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.972887 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vscm2\" (UniqueName: \"kubernetes.io/projected/440c2cc5-deab-44af-b561-072f18b90f23-kube-api-access-vscm2\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:15 crc kubenswrapper[4988]: I1123 08:27:15.995361 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-513a8398-efc1-4596-a6f8-5d2b1bbdd233\") pod \"prometheus-metric-storage-0\" (UID: \"440c2cc5-deab-44af-b561-072f18b90f23\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:16 crc kubenswrapper[4988]: I1123 08:27:16.113662 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:16 crc kubenswrapper[4988]: I1123 08:27:16.510765 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0330452d-55b6-4b5b-98a9-be1342e2f47a" path="/var/lib/kubelet/pods/0330452d-55b6-4b5b-98a9-be1342e2f47a/volumes" Nov 23 08:27:16 crc kubenswrapper[4988]: I1123 08:27:16.512860 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba033bc0-567a-4098-bb94-500843c4f18b" path="/var/lib/kubelet/pods/ba033bc0-567a-4098-bb94-500843c4f18b/volumes" Nov 23 08:27:17 crc kubenswrapper[4988]: I1123 08:27:17.704044 4988 generic.go:334] "Generic (PLEG): container finished" podID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerID="38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6" exitCode=0 Nov 23 08:27:17 crc kubenswrapper[4988]: I1123 08:27:17.704096 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjl8w" event={"ID":"647e4168-3faf-4598-bbd7-ff3e52e5cc82","Type":"ContainerDied","Data":"38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6"} Nov 23 08:27:18 crc kubenswrapper[4988]: I1123 08:27:18.706179 4988 scope.go:117] "RemoveContainer" containerID="61da122dfc1b2dbe83673da316d9d96425d597f6782a5a0664a8f7f4ced3b7d3" Nov 23 08:27:18 crc kubenswrapper[4988]: I1123 08:27:18.837411 4988 scope.go:117] "RemoveContainer" containerID="4ee132ea100cd08b80becbf5035c6eb44c7f052a5cd5f023d47202400ddf836a" Nov 23 08:27:19 crc kubenswrapper[4988]: I1123 08:27:19.027756 4988 scope.go:117] "RemoveContainer" containerID="c85d4caa8b61dd3fde87b342d41e1b2b1bb6dd16d19d5f8964f001a88669c029" Nov 23 08:27:19 crc kubenswrapper[4988]: I1123 08:27:19.220281 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:27:19 crc kubenswrapper[4988]: W1123 08:27:19.229257 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod440c2cc5_deab_44af_b561_072f18b90f23.slice/crio-1d38b14ed2513f32c2aa8221ef6a1930bb741210966beb3a1101d1803cce2ddc WatchSource:0}: Error finding container 1d38b14ed2513f32c2aa8221ef6a1930bb741210966beb3a1101d1803cce2ddc: Status 404 returned error can't find the container with id 1d38b14ed2513f32c2aa8221ef6a1930bb741210966beb3a1101d1803cce2ddc Nov 23 08:27:19 crc kubenswrapper[4988]: I1123 08:27:19.770021 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjl8w" event={"ID":"647e4168-3faf-4598-bbd7-ff3e52e5cc82","Type":"ContainerStarted","Data":"a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4"} Nov 23 08:27:19 crc kubenswrapper[4988]: I1123 08:27:19.773519 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerStarted","Data":"7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17"} Nov 23 08:27:19 crc kubenswrapper[4988]: I1123 08:27:19.775234 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"440c2cc5-deab-44af-b561-072f18b90f23","Type":"ContainerStarted","Data":"1d38b14ed2513f32c2aa8221ef6a1930bb741210966beb3a1101d1803cce2ddc"} Nov 23 08:27:19 crc kubenswrapper[4988]: I1123 08:27:19.853068 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xjl8w" Nov 23 08:27:19 crc 
kubenswrapper[4988]: I1123 08:27:19.853113 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xjl8w" Nov 23 08:27:20 crc kubenswrapper[4988]: I1123 08:27:20.787566 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerStarted","Data":"eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de"} Nov 23 08:27:20 crc kubenswrapper[4988]: I1123 08:27:20.915873 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xjl8w" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerName="registry-server" probeResult="failure" output=< Nov 23 08:27:20 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:27:20 crc kubenswrapper[4988]: > Nov 23 08:27:21 crc kubenswrapper[4988]: I1123 08:27:21.804418 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerStarted","Data":"0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7"} Nov 23 08:27:22 crc kubenswrapper[4988]: I1123 08:27:22.816301 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"440c2cc5-deab-44af-b561-072f18b90f23","Type":"ContainerStarted","Data":"046424854abe18be6e2d9cbd649180f613f2229e30d0842aa02dd91056d8f07d"} Nov 23 08:27:22 crc kubenswrapper[4988]: I1123 08:27:22.862636 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xjl8w" podStartSLOduration=9.656545642 podStartE2EDuration="13.862619454s" podCreationTimestamp="2025-11-23 08:27:09 +0000 UTC" firstStartedPulling="2025-11-23 08:27:14.647451817 +0000 UTC m=+6086.955964580" lastFinishedPulling="2025-11-23 08:27:18.853525619 +0000 UTC m=+6091.162038392" observedRunningTime="2025-11-23 08:27:19.79092213 +0000 UTC m=+6092.099434903" watchObservedRunningTime="2025-11-23 08:27:22.862619454 +0000 UTC m=+6095.171132217" Nov 23 08:27:23 crc kubenswrapper[4988]: I1123 08:27:23.838438 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerStarted","Data":"3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7"} Nov 23 08:27:23 crc kubenswrapper[4988]: I1123 08:27:23.838831 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 08:27:23 crc kubenswrapper[4988]: I1123 08:27:23.869794 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.180710481 podStartE2EDuration="11.869760297s" podCreationTimestamp="2025-11-23 08:27:12 +0000 UTC" firstStartedPulling="2025-11-23 08:27:13.140298616 +0000 UTC m=+6085.448811379" lastFinishedPulling="2025-11-23 08:27:22.829348432 +0000 UTC m=+6095.137861195" observedRunningTime="2025-11-23 08:27:23.863465633 +0000 UTC m=+6096.171978396" watchObservedRunningTime="2025-11-23 08:27:23.869760297 +0000 UTC m=+6096.178273060" Nov 23 08:27:27 crc kubenswrapper[4988]: I1123 08:27:27.495915 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:27:27 crc kubenswrapper[4988]: E1123 08:27:27.496953 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.608705 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-6x92q"] Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.610000 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.661666 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-6x92q"] Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.736590 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwlv2\" (UniqueName: \"kubernetes.io/projected/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-kube-api-access-mwlv2\") pod \"aodh-db-create-6x92q\" (UID: \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\") " pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.736871 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-operator-scripts\") pod \"aodh-db-create-6x92q\" (UID: \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\") " pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.807156 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-1f30-account-create-5plkb"] Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.809011 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.813169 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.816899 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1f30-account-create-5plkb"] Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.838953 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwlv2\" (UniqueName: \"kubernetes.io/projected/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-kube-api-access-mwlv2\") pod \"aodh-db-create-6x92q\" (UID: \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\") " pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.839065 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-operator-scripts\") pod \"aodh-db-create-6x92q\" (UID: \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\") " pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.839918 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-operator-scripts\") pod \"aodh-db-create-6x92q\" (UID: \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\") " pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.861965 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwlv2\" (UniqueName: \"kubernetes.io/projected/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-kube-api-access-mwlv2\") pod \"aodh-db-create-6x92q\" (UID: \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\") " pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.928832 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.940450 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99f29\" (UniqueName: \"kubernetes.io/projected/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-kube-api-access-99f29\") pod \"aodh-1f30-account-create-5plkb\" (UID: \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\") " pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:28 crc kubenswrapper[4988]: I1123 08:27:28.940532 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-operator-scripts\") pod \"aodh-1f30-account-create-5plkb\" (UID: \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\") " pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.042658 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-operator-scripts\") pod \"aodh-1f30-account-create-5plkb\" (UID: \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\") " pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.042893 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99f29\" (UniqueName: \"kubernetes.io/projected/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-kube-api-access-99f29\") pod \"aodh-1f30-account-create-5plkb\" (UID: \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\") " pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.043495 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-operator-scripts\") pod \"aodh-1f30-account-create-5plkb\" (UID: \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\") " pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.065566 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99f29\" (UniqueName: \"kubernetes.io/projected/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-kube-api-access-99f29\") pod \"aodh-1f30-account-create-5plkb\" (UID: \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\") " pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.128689 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.457882 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-6x92q"] Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.654230 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-1f30-account-create-5plkb"] Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.901134 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1f30-account-create-5plkb" event={"ID":"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8","Type":"ContainerStarted","Data":"6d0873795938a4bfdb3b38aac7a68a066db6fe941091449701e75bd7781c6f8d"} Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.901180 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1f30-account-create-5plkb" event={"ID":"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8","Type":"ContainerStarted","Data":"b23b295e5d2b6d3362c8197e51531571495215576de0d2bf6efb0b011319372d"} Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.905254 4988 generic.go:334] "Generic (PLEG): container finished" podID="84c4cb7e-e9e5-417c-8ae2-60d15b0abd27" containerID="6d1afb6861c4e1be1149b542c9375880ac9db63c06e92a2b6514ec5950499c90" exitCode=0 Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.905294 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-6x92q" event={"ID":"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27","Type":"ContainerDied","Data":"6d1afb6861c4e1be1149b542c9375880ac9db63c06e92a2b6514ec5950499c90"} Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.905316 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-6x92q" event={"ID":"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27","Type":"ContainerStarted","Data":"a0baeb4ec09753d3700ca14d74789bff2f9a0b7297cb90fcd6b1e9a5912d2364"} Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.911586 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xjl8w" Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.928472 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-1f30-account-create-5plkb" podStartSLOduration=1.928084309 podStartE2EDuration="1.928084309s" podCreationTimestamp="2025-11-23 08:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:27:29.922244277 +0000 UTC m=+6102.230757060" watchObservedRunningTime="2025-11-23 08:27:29.928084309 +0000 UTC m=+6102.236597072" Nov 23 08:27:29 crc kubenswrapper[4988]: I1123 08:27:29.973778 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xjl8w" Nov 23 08:27:30 crc kubenswrapper[4988]: I1123 08:27:30.158348 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xjl8w"] Nov 23 08:27:30 crc kubenswrapper[4988]: I1123 08:27:30.919545 4988 generic.go:334] "Generic (PLEG): container finished" podID="440c2cc5-deab-44af-b561-072f18b90f23" containerID="046424854abe18be6e2d9cbd649180f613f2229e30d0842aa02dd91056d8f07d" exitCode=0 Nov 23 08:27:30 crc kubenswrapper[4988]: I1123 08:27:30.919644 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"440c2cc5-deab-44af-b561-072f18b90f23","Type":"ContainerDied","Data":"046424854abe18be6e2d9cbd649180f613f2229e30d0842aa02dd91056d8f07d"} Nov 23 08:27:30 crc kubenswrapper[4988]: I1123 08:27:30.922562 4988 generic.go:334] "Generic (PLEG): container finished" podID="0f4dee81-335a-48bb-aa1f-22a74fc5a3a8" containerID="6d0873795938a4bfdb3b38aac7a68a066db6fe941091449701e75bd7781c6f8d" exitCode=0 Nov 23 08:27:30 crc kubenswrapper[4988]: I1123 08:27:30.922810 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1f30-account-create-5plkb" event={"ID":"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8","Type":"ContainerDied","Data":"6d0873795938a4bfdb3b38aac7a68a066db6fe941091449701e75bd7781c6f8d"} Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.047865 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-rrjhf"] Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.061396 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-rrjhf"] Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.411207 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.495295 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-operator-scripts\") pod \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\" (UID: \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\") " Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.495490 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwlv2\" (UniqueName: \"kubernetes.io/projected/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-kube-api-access-mwlv2\") pod \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\" (UID: \"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27\") " Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.496235 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84c4cb7e-e9e5-417c-8ae2-60d15b0abd27" (UID: "84c4cb7e-e9e5-417c-8ae2-60d15b0abd27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.501589 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-kube-api-access-mwlv2" (OuterVolumeSpecName: "kube-api-access-mwlv2") pod "84c4cb7e-e9e5-417c-8ae2-60d15b0abd27" (UID: "84c4cb7e-e9e5-417c-8ae2-60d15b0abd27"). InnerVolumeSpecName "kube-api-access-mwlv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.599374 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwlv2\" (UniqueName: \"kubernetes.io/projected/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-kube-api-access-mwlv2\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.601310 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.942522 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-6x92q" event={"ID":"84c4cb7e-e9e5-417c-8ae2-60d15b0abd27","Type":"ContainerDied","Data":"a0baeb4ec09753d3700ca14d74789bff2f9a0b7297cb90fcd6b1e9a5912d2364"} Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.942861 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0baeb4ec09753d3700ca14d74789bff2f9a0b7297cb90fcd6b1e9a5912d2364" Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.942643 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-6x92q" Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.946408 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"440c2cc5-deab-44af-b561-072f18b90f23","Type":"ContainerStarted","Data":"3db5f53270c9023e44f93b4e2d4131eaf4444ce2142806d1e18d450049d57935"} Nov 23 08:27:31 crc kubenswrapper[4988]: I1123 08:27:31.946872 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xjl8w" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerName="registry-server" containerID="cri-o://a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4" gracePeriod=2 Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.304113 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.414687 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjl8w" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.415299 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-operator-scripts\") pod \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\" (UID: \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\") " Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.415348 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99f29\" (UniqueName: \"kubernetes.io/projected/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-kube-api-access-99f29\") pod \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\" (UID: \"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8\") " Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.417292 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f4dee81-335a-48bb-aa1f-22a74fc5a3a8" (UID: "0f4dee81-335a-48bb-aa1f-22a74fc5a3a8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.421936 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-kube-api-access-99f29" (OuterVolumeSpecName: "kube-api-access-99f29") pod "0f4dee81-335a-48bb-aa1f-22a74fc5a3a8" (UID: "0f4dee81-335a-48bb-aa1f-22a74fc5a3a8"). InnerVolumeSpecName "kube-api-access-99f29". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.507869 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="247f0a58-e456-4673-8ca8-15e12ab2af71" path="/var/lib/kubelet/pods/247f0a58-e456-4673-8ca8-15e12ab2af71/volumes" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.517051 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6xnd\" (UniqueName: \"kubernetes.io/projected/647e4168-3faf-4598-bbd7-ff3e52e5cc82-kube-api-access-d6xnd\") pod \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.517270 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-catalog-content\") pod \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.517422 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-utilities\") pod \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\" (UID: \"647e4168-3faf-4598-bbd7-ff3e52e5cc82\") " Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.517905 4988 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.517932 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99f29\" (UniqueName: \"kubernetes.io/projected/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8-kube-api-access-99f29\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.518444 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-utilities" (OuterVolumeSpecName: "utilities") pod "647e4168-3faf-4598-bbd7-ff3e52e5cc82" (UID: "647e4168-3faf-4598-bbd7-ff3e52e5cc82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.520354 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647e4168-3faf-4598-bbd7-ff3e52e5cc82-kube-api-access-d6xnd" (OuterVolumeSpecName: "kube-api-access-d6xnd") pod "647e4168-3faf-4598-bbd7-ff3e52e5cc82" (UID: "647e4168-3faf-4598-bbd7-ff3e52e5cc82"). InnerVolumeSpecName "kube-api-access-d6xnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.572876 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "647e4168-3faf-4598-bbd7-ff3e52e5cc82" (UID: "647e4168-3faf-4598-bbd7-ff3e52e5cc82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.620014 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.620043 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/647e4168-3faf-4598-bbd7-ff3e52e5cc82-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.620053 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6xnd\" (UniqueName: \"kubernetes.io/projected/647e4168-3faf-4598-bbd7-ff3e52e5cc82-kube-api-access-d6xnd\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.981060 4988 generic.go:334] "Generic (PLEG): container finished" podID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerID="a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4" exitCode=0 Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.981297 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjl8w" event={"ID":"647e4168-3faf-4598-bbd7-ff3e52e5cc82","Type":"ContainerDied","Data":"a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4"} Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.981361 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xjl8w" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.981411 4988 scope.go:117] "RemoveContainer" containerID="a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.981369 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xjl8w" event={"ID":"647e4168-3faf-4598-bbd7-ff3e52e5cc82","Type":"ContainerDied","Data":"61425ac35e0c6a72d073374a11d4d9b9a20792e7a0e686e1e176db472cca3b33"} Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.991181 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-1f30-account-create-5plkb" Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.994161 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-1f30-account-create-5plkb" event={"ID":"0f4dee81-335a-48bb-aa1f-22a74fc5a3a8","Type":"ContainerDied","Data":"b23b295e5d2b6d3362c8197e51531571495215576de0d2bf6efb0b011319372d"} Nov 23 08:27:32 crc kubenswrapper[4988]: I1123 08:27:32.994255 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b23b295e5d2b6d3362c8197e51531571495215576de0d2bf6efb0b011319372d" Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.052118 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xjl8w"] Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.055077 4988 scope.go:117] "RemoveContainer" containerID="38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6" Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.062814 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xjl8w"] Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.127995 4988 scope.go:117] "RemoveContainer" containerID="6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7" Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.156526 4988 scope.go:117] "RemoveContainer" containerID="a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4" Nov 23 08:27:33 crc kubenswrapper[4988]: E1123 08:27:33.156963 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4\": container with ID starting with a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4 not found: ID does not exist" containerID="a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4" Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.156994 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4"} err="failed to get container status \"a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4\": rpc error: code = NotFound desc = could not find container \"a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4\": container with ID starting with a65a3dfe9f962bbbe021b192cb510d8932e0345a783fdf394ebfaf2e502261e4 not found: ID does not exist" Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.157015 4988 scope.go:117] "RemoveContainer" containerID="38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6" Nov 23 08:27:33 crc kubenswrapper[4988]: E1123 08:27:33.157615 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6\": container with ID starting with 38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6 not found: ID does not exist" containerID="38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6" Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.157640 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6"} err="failed to get container status \"38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6\": rpc error: code = NotFound 
desc = could not find container \"38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6\": container with ID starting with 38c2030a47da33f74460d231dafedcac233896fc43d9d4623fdda23ac96a85f6 not found: ID does not exist" Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.157653 4988 scope.go:117] "RemoveContainer" containerID="6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7" Nov 23 08:27:33 crc kubenswrapper[4988]: E1123 08:27:33.157896 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7\": container with ID starting with 6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7 not found: ID does not exist" containerID="6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7" Nov 23 08:27:33 crc kubenswrapper[4988]: I1123 08:27:33.157922 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7"} err="failed to get container status \"6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7\": rpc error: code = NotFound desc = could not find container \"6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7\": container with ID starting with 6f2662c5eb013ab798e7cc6f090c562a2933a1342475ff926576a38ea11445d7 not found: ID does not exist" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.348104 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-mt9sl"] Nov 23 08:27:34 crc kubenswrapper[4988]: E1123 08:27:34.349028 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerName="registry-server" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.349047 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerName="registry-server" Nov 23 08:27:34 crc kubenswrapper[4988]: E1123 08:27:34.349079 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c4cb7e-e9e5-417c-8ae2-60d15b0abd27" containerName="mariadb-database-create" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.349090 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c4cb7e-e9e5-417c-8ae2-60d15b0abd27" containerName="mariadb-database-create" Nov 23 08:27:34 crc kubenswrapper[4988]: E1123 08:27:34.349125 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerName="extract-utilities" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.349133 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerName="extract-utilities" Nov 23 08:27:34 crc kubenswrapper[4988]: E1123 08:27:34.349150 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerName="extract-content" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.349160 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerName="extract-content" Nov 23 08:27:34 crc kubenswrapper[4988]: E1123 08:27:34.349177 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4dee81-335a-48bb-aa1f-22a74fc5a3a8" containerName="mariadb-account-create" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.349187 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0f4dee81-335a-48bb-aa1f-22a74fc5a3a8" containerName="mariadb-account-create" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.349508 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" containerName="registry-server" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.349521 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4dee81-335a-48bb-aa1f-22a74fc5a3a8" containerName="mariadb-account-create" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.349552 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c4cb7e-e9e5-417c-8ae2-60d15b0abd27" containerName="mariadb-database-create" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.350462 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.355453 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-x9fjg" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.356395 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.357094 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.359669 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.370281 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-mt9sl"] Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.470094 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-scripts\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.470509 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-config-data\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.470721 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-combined-ca-bundle\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.470821 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t4lr\" (UniqueName: \"kubernetes.io/projected/63f4ea60-7e24-4d82-9109-0c8494a900ba-kube-api-access-2t4lr\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.509299 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="647e4168-3faf-4598-bbd7-ff3e52e5cc82" path="/var/lib/kubelet/pods/647e4168-3faf-4598-bbd7-ff3e52e5cc82/volumes" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.572242 4988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-combined-ca-bundle\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.572476 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t4lr\" (UniqueName: \"kubernetes.io/projected/63f4ea60-7e24-4d82-9109-0c8494a900ba-kube-api-access-2t4lr\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.572526 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-scripts\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.572616 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-config-data\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.578191 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-combined-ca-bundle\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.579429 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-config-data\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.591312 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-scripts\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.591399 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t4lr\" (UniqueName: \"kubernetes.io/projected/63f4ea60-7e24-4d82-9109-0c8494a900ba-kube-api-access-2t4lr\") pod \"aodh-db-sync-mt9sl\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:34 crc kubenswrapper[4988]: I1123 08:27:34.753799 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:35 crc kubenswrapper[4988]: I1123 08:27:35.018782 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"440c2cc5-deab-44af-b561-072f18b90f23","Type":"ContainerStarted","Data":"f5a3443919c3d2a4e92ecc41138ac8d9d87f93a8d30042118bb2c855b1888bd0"} Nov 23 08:27:35 crc kubenswrapper[4988]: I1123 08:27:35.019048 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"440c2cc5-deab-44af-b561-072f18b90f23","Type":"ContainerStarted","Data":"ddbcb96e6cef1c86100938ca2535cc409b3261b1a9cf947403e9660b89367587"} Nov 23 08:27:35 crc kubenswrapper[4988]: I1123 08:27:35.049706 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.049685644 podStartE2EDuration="20.049685644s" podCreationTimestamp="2025-11-23 08:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:27:35.041854683 +0000 UTC m=+6107.350367456" watchObservedRunningTime="2025-11-23 08:27:35.049685644 +0000 UTC m=+6107.358198397" Nov 23 08:27:35 crc kubenswrapper[4988]: I1123 08:27:35.285007 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-mt9sl"] Nov 23 08:27:36 crc kubenswrapper[4988]: I1123 08:27:36.028894 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mt9sl" event={"ID":"63f4ea60-7e24-4d82-9109-0c8494a900ba","Type":"ContainerStarted","Data":"344f933fcf65fee95443c9674f92e83c6e9366252cd287dcbe95ec14304ca057"} Nov 23 08:27:36 crc kubenswrapper[4988]: I1123 08:27:36.116891 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:38 crc kubenswrapper[4988]: I1123 08:27:38.505907 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:27:38 crc kubenswrapper[4988]: E1123 08:27:38.507091 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:27:40 crc kubenswrapper[4988]: I1123 08:27:40.085950 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mt9sl" event={"ID":"63f4ea60-7e24-4d82-9109-0c8494a900ba","Type":"ContainerStarted","Data":"0aac1b9b2f1434d02ec58308da5593793acc9f4e61b9bf1e0aafbd617272b7e4"} Nov 23 08:27:40 crc kubenswrapper[4988]: I1123 08:27:40.107349 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-mt9sl" podStartSLOduration=2.24256593 podStartE2EDuration="6.107322473s" podCreationTimestamp="2025-11-23 08:27:34 +0000 UTC" firstStartedPulling="2025-11-23 08:27:35.28639404 +0000 UTC m=+6107.594906803" lastFinishedPulling="2025-11-23 08:27:39.151150583 +0000 UTC m=+6111.459663346" observedRunningTime="2025-11-23 08:27:40.103213002 +0000 UTC m=+6112.411725785" watchObservedRunningTime="2025-11-23 08:27:40.107322473 +0000 UTC m=+6112.415835266" Nov 23 08:27:42 crc kubenswrapper[4988]: I1123 08:27:42.106406 4988 
generic.go:334] "Generic (PLEG): container finished" podID="63f4ea60-7e24-4d82-9109-0c8494a900ba" containerID="0aac1b9b2f1434d02ec58308da5593793acc9f4e61b9bf1e0aafbd617272b7e4" exitCode=0 Nov 23 08:27:42 crc kubenswrapper[4988]: I1123 08:27:42.106497 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mt9sl" event={"ID":"63f4ea60-7e24-4d82-9109-0c8494a900ba","Type":"ContainerDied","Data":"0aac1b9b2f1434d02ec58308da5593793acc9f4e61b9bf1e0aafbd617272b7e4"} Nov 23 08:27:42 crc kubenswrapper[4988]: I1123 08:27:42.516919 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.495662 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.668014 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-config-data\") pod \"63f4ea60-7e24-4d82-9109-0c8494a900ba\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.668223 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t4lr\" (UniqueName: \"kubernetes.io/projected/63f4ea60-7e24-4d82-9109-0c8494a900ba-kube-api-access-2t4lr\") pod \"63f4ea60-7e24-4d82-9109-0c8494a900ba\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.668515 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-scripts\") pod \"63f4ea60-7e24-4d82-9109-0c8494a900ba\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.668672 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-combined-ca-bundle\") pod \"63f4ea60-7e24-4d82-9109-0c8494a900ba\" (UID: \"63f4ea60-7e24-4d82-9109-0c8494a900ba\") " Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.674061 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f4ea60-7e24-4d82-9109-0c8494a900ba-kube-api-access-2t4lr" (OuterVolumeSpecName: "kube-api-access-2t4lr") pod "63f4ea60-7e24-4d82-9109-0c8494a900ba" (UID: "63f4ea60-7e24-4d82-9109-0c8494a900ba"). InnerVolumeSpecName "kube-api-access-2t4lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.674894 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-scripts" (OuterVolumeSpecName: "scripts") pod "63f4ea60-7e24-4d82-9109-0c8494a900ba" (UID: "63f4ea60-7e24-4d82-9109-0c8494a900ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.714973 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63f4ea60-7e24-4d82-9109-0c8494a900ba" (UID: "63f4ea60-7e24-4d82-9109-0c8494a900ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.744083 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-config-data" (OuterVolumeSpecName: "config-data") pod "63f4ea60-7e24-4d82-9109-0c8494a900ba" (UID: "63f4ea60-7e24-4d82-9109-0c8494a900ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.774179 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.774230 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t4lr\" (UniqueName: \"kubernetes.io/projected/63f4ea60-7e24-4d82-9109-0c8494a900ba-kube-api-access-2t4lr\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.774245 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:43 crc kubenswrapper[4988]: I1123 08:27:43.774255 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63f4ea60-7e24-4d82-9109-0c8494a900ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.133216 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mt9sl" event={"ID":"63f4ea60-7e24-4d82-9109-0c8494a900ba","Type":"ContainerDied","Data":"344f933fcf65fee95443c9674f92e83c6e9366252cd287dcbe95ec14304ca057"} Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.133606 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344f933fcf65fee95443c9674f92e83c6e9366252cd287dcbe95ec14304ca057" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.133296 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-mt9sl" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.441892 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 23 08:27:44 crc kubenswrapper[4988]: E1123 08:27:44.442607 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f4ea60-7e24-4d82-9109-0c8494a900ba" containerName="aodh-db-sync" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.442638 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f4ea60-7e24-4d82-9109-0c8494a900ba" containerName="aodh-db-sync" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.443000 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f4ea60-7e24-4d82-9109-0c8494a900ba" containerName="aodh-db-sync" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.447331 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.450273 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-x9fjg" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.450562 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.450708 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.465905 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.489461 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6g7r\" (UniqueName: \"kubernetes.io/projected/db43a0a7-39cd-4925-a5af-2d9cf760237d-kube-api-access-t6g7r\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.489597 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-config-data\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.489638 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-scripts\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.489671 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.595546 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-config-data\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.595607 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-scripts\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.595636 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.595711 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6g7r\" (UniqueName: \"kubernetes.io/projected/db43a0a7-39cd-4925-a5af-2d9cf760237d-kube-api-access-t6g7r\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: 
I1123 08:27:44.612870 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.613906 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-scripts\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.624513 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6g7r\" (UniqueName: \"kubernetes.io/projected/db43a0a7-39cd-4925-a5af-2d9cf760237d-kube-api-access-t6g7r\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.632188 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-config-data\") pod \"aodh-0\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " pod="openstack/aodh-0" Nov 23 08:27:44 crc kubenswrapper[4988]: I1123 08:27:44.784702 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 23 08:27:45 crc kubenswrapper[4988]: I1123 08:27:45.282836 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 23 08:27:45 crc kubenswrapper[4988]: W1123 08:27:45.283568 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb43a0a7_39cd_4925_a5af_2d9cf760237d.slice/crio-b2acecaa7324f7951b92114355cfaab2534f021e68fa374dfb5f594ffd438e11 WatchSource:0}: Error finding container b2acecaa7324f7951b92114355cfaab2534f021e68fa374dfb5f594ffd438e11: Status 404 returned error can't find the container with id b2acecaa7324f7951b92114355cfaab2534f021e68fa374dfb5f594ffd438e11 Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.115383 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.125264 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.162909 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerStarted","Data":"10b3c4745801ddf84a06516af9b8c2b4235083c3730ea04eb3ce46caa8913b87"} Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.162950 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerStarted","Data":"b2acecaa7324f7951b92114355cfaab2534f021e68fa374dfb5f594ffd438e11"} Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.171248 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.647909 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.648164 4988 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="ceilometer-central-agent" containerID="cri-o://7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17" gracePeriod=30 Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.648219 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="proxy-httpd" containerID="cri-o://3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7" gracePeriod=30 Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.648258 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="sg-core" containerID="cri-o://0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7" gracePeriod=30 Nov 23 08:27:46 crc kubenswrapper[4988]: I1123 08:27:46.648350 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="ceilometer-notification-agent" containerID="cri-o://eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de" gracePeriod=30 Nov 23 08:27:47 crc kubenswrapper[4988]: I1123 08:27:47.177753 4988 generic.go:334] "Generic (PLEG): container finished" podID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerID="3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7" exitCode=0 Nov 23 08:27:47 crc kubenswrapper[4988]: I1123 08:27:47.178118 4988 generic.go:334] "Generic (PLEG): container finished" podID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerID="0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7" exitCode=2 Nov 23 08:27:47 crc kubenswrapper[4988]: I1123 08:27:47.178128 4988 generic.go:334] "Generic (PLEG): container finished" podID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerID="7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17" exitCode=0 Nov 23 08:27:47 crc kubenswrapper[4988]: I1123 08:27:47.177903 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerDied","Data":"3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7"} Nov 23 08:27:47 crc kubenswrapper[4988]: I1123 08:27:47.178270 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerDied","Data":"0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7"} Nov 23 08:27:47 crc kubenswrapper[4988]: I1123 08:27:47.178289 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerDied","Data":"7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17"} Nov 23 08:27:47 crc kubenswrapper[4988]: I1123 08:27:47.898834 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 23 08:27:48 crc kubenswrapper[4988]: I1123 08:27:48.191357 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerStarted","Data":"12ce64b9e0a6bad4e1fdadf09fd15fbebf6255d96d9d4292343b61c9e608be5d"} Nov 23 08:27:49 crc kubenswrapper[4988]: I1123 08:27:49.206683 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerStarted","Data":"bc4a9e0545b68e1cce2e3346df0fca2623cf7dd3add31bdd1a668a55c1e294dc"} Nov 23 08:27:49 crc kubenswrapper[4988]: I1123 08:27:49.496910 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:27:49 crc kubenswrapper[4988]: E1123 08:27:49.497521 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:27:50 crc kubenswrapper[4988]: I1123 08:27:50.217141 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerStarted","Data":"6ab40096a2ff9b385d629a1feb48ac924dbc47b101f22799666de4ede5330b09"} Nov 23 08:27:50 crc kubenswrapper[4988]: I1123 08:27:50.217313 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-api" containerID="cri-o://10b3c4745801ddf84a06516af9b8c2b4235083c3730ea04eb3ce46caa8913b87" gracePeriod=30 Nov 23 08:27:50 crc kubenswrapper[4988]: I1123 08:27:50.217477 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-notifier" containerID="cri-o://bc4a9e0545b68e1cce2e3346df0fca2623cf7dd3add31bdd1a668a55c1e294dc" gracePeriod=30 Nov 23 08:27:50 crc kubenswrapper[4988]: I1123 08:27:50.217449 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-evaluator" containerID="cri-o://12ce64b9e0a6bad4e1fdadf09fd15fbebf6255d96d9d4292343b61c9e608be5d" gracePeriod=30 Nov 23 08:27:50 crc kubenswrapper[4988]: I1123 08:27:50.217435 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-listener" containerID="cri-o://6ab40096a2ff9b385d629a1feb48ac924dbc47b101f22799666de4ede5330b09" gracePeriod=30 Nov 23 08:27:50 crc kubenswrapper[4988]: I1123 08:27:50.258094 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.711095566 podStartE2EDuration="6.258040893s" podCreationTimestamp="2025-11-23 08:27:44 +0000 UTC" firstStartedPulling="2025-11-23 08:27:45.286738352 +0000 UTC m=+6117.595251115" lastFinishedPulling="2025-11-23 08:27:49.833683679 +0000 UTC m=+6122.142196442" observedRunningTime="2025-11-23 08:27:50.252420846 +0000 UTC m=+6122.560933609" watchObservedRunningTime="2025-11-23 08:27:50.258040893 +0000 UTC m=+6122.566553676" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.245726 4988 generic.go:334] "Generic (PLEG): container finished" podID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerID="12ce64b9e0a6bad4e1fdadf09fd15fbebf6255d96d9d4292343b61c9e608be5d" exitCode=0 Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.246030 4988 generic.go:334] "Generic (PLEG): container finished" podID="db43a0a7-39cd-4925-a5af-2d9cf760237d" 
containerID="10b3c4745801ddf84a06516af9b8c2b4235083c3730ea04eb3ce46caa8913b87" exitCode=0 Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.245812 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerDied","Data":"12ce64b9e0a6bad4e1fdadf09fd15fbebf6255d96d9d4292343b61c9e608be5d"} Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.246070 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerDied","Data":"10b3c4745801ddf84a06516af9b8c2b4235083c3730ea04eb3ce46caa8913b87"} Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.712856 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.890615 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-run-httpd\") pod \"7f097e75-cc9d-4a21-bf36-1320565d2be7\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.890791 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-log-httpd\") pod \"7f097e75-cc9d-4a21-bf36-1320565d2be7\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.890855 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-config-data\") pod \"7f097e75-cc9d-4a21-bf36-1320565d2be7\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.890880 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-sg-core-conf-yaml\") pod \"7f097e75-cc9d-4a21-bf36-1320565d2be7\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.890909 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-scripts\") pod \"7f097e75-cc9d-4a21-bf36-1320565d2be7\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.890942 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-combined-ca-bundle\") pod \"7f097e75-cc9d-4a21-bf36-1320565d2be7\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.890972 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ssxm\" (UniqueName: \"kubernetes.io/projected/7f097e75-cc9d-4a21-bf36-1320565d2be7-kube-api-access-7ssxm\") pod \"7f097e75-cc9d-4a21-bf36-1320565d2be7\" (UID: \"7f097e75-cc9d-4a21-bf36-1320565d2be7\") " Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.891170 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"7f097e75-cc9d-4a21-bf36-1320565d2be7" (UID: "7f097e75-cc9d-4a21-bf36-1320565d2be7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.891406 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f097e75-cc9d-4a21-bf36-1320565d2be7" (UID: "7f097e75-cc9d-4a21-bf36-1320565d2be7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.891698 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.891718 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f097e75-cc9d-4a21-bf36-1320565d2be7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.921555 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f097e75-cc9d-4a21-bf36-1320565d2be7-kube-api-access-7ssxm" (OuterVolumeSpecName: "kube-api-access-7ssxm") pod "7f097e75-cc9d-4a21-bf36-1320565d2be7" (UID: "7f097e75-cc9d-4a21-bf36-1320565d2be7"). InnerVolumeSpecName "kube-api-access-7ssxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.923715 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-scripts" (OuterVolumeSpecName: "scripts") pod "7f097e75-cc9d-4a21-bf36-1320565d2be7" (UID: "7f097e75-cc9d-4a21-bf36-1320565d2be7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.947670 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f097e75-cc9d-4a21-bf36-1320565d2be7" (UID: "7f097e75-cc9d-4a21-bf36-1320565d2be7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.994222 4988 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.994278 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:51 crc kubenswrapper[4988]: I1123 08:27:51.994291 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ssxm\" (UniqueName: \"kubernetes.io/projected/7f097e75-cc9d-4a21-bf36-1320565d2be7-kube-api-access-7ssxm\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.020447 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-config-data" (OuterVolumeSpecName: "config-data") pod "7f097e75-cc9d-4a21-bf36-1320565d2be7" (UID: "7f097e75-cc9d-4a21-bf36-1320565d2be7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.024420 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f097e75-cc9d-4a21-bf36-1320565d2be7" (UID: "7f097e75-cc9d-4a21-bf36-1320565d2be7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.095611 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.095642 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f097e75-cc9d-4a21-bf36-1320565d2be7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.256535 4988 generic.go:334] "Generic (PLEG): container finished" podID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerID="eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de" exitCode=0 Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.256676 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerDied","Data":"eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de"} Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.257497 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f097e75-cc9d-4a21-bf36-1320565d2be7","Type":"ContainerDied","Data":"f4efe781323d99bdb3b6167886787d86ca013b0b55c8c5a4cd4780ed0723c614"} Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.257521 4988 scope.go:117] "RemoveContainer" containerID="3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.256758 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.325496 4988 scope.go:117] "RemoveContainer" containerID="0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.336348 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.363873 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.377561 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:27:52 crc kubenswrapper[4988]: E1123 08:27:52.378047 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="ceilometer-notification-agent" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.378064 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="ceilometer-notification-agent" Nov 23 08:27:52 crc kubenswrapper[4988]: E1123 08:27:52.378079 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="proxy-httpd" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.378085 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="proxy-httpd" Nov 23 08:27:52 crc kubenswrapper[4988]: E1123 08:27:52.378102 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="sg-core" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.378110 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="sg-core" Nov 23 08:27:52 crc kubenswrapper[4988]: E1123 08:27:52.378127 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="ceilometer-central-agent" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.378132 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="ceilometer-central-agent" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.378328 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="proxy-httpd" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.378349 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="ceilometer-central-agent" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.378360 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="sg-core" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.378369 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" containerName="ceilometer-notification-agent" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.380853 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.391581 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.412770 4988 scope.go:117] "RemoveContainer" containerID="eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.413050 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.413105 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.443456 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-config-data\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.443531 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.443625 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdwtp\" (UniqueName: \"kubernetes.io/projected/3d8010d9-0226-491d-80be-fc58006abaaa-kube-api-access-kdwtp\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.443653 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-scripts\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.443682 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.443718 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-log-httpd\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.443757 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-run-httpd\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.472325 4988 scope.go:117] "RemoveContainer" containerID="7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 
08:27:52.491823 4988 scope.go:117] "RemoveContainer" containerID="3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7" Nov 23 08:27:52 crc kubenswrapper[4988]: E1123 08:27:52.493271 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7\": container with ID starting with 3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7 not found: ID does not exist" containerID="3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.493305 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7"} err="failed to get container status \"3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7\": rpc error: code = NotFound desc = could not find container \"3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7\": container with ID starting with 3cfa58d08b827681c4c8bf3e240ad5cc1e14c8c3f0aae7bb7148dc6b658bfeb7 not found: ID does not exist" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.493325 4988 scope.go:117] "RemoveContainer" containerID="0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7" Nov 23 08:27:52 crc kubenswrapper[4988]: E1123 08:27:52.493904 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7\": container with ID starting with 0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7 not found: ID does not exist" containerID="0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.493927 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7"} err="failed to get container status \"0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7\": rpc error: code = NotFound desc = could not find container \"0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7\": container with ID starting with 0b8747f51e2e2623ab75f93fd0a83f035bfc74f6d439dfbb3d9a4a39707fbfd7 not found: ID does not exist" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.493942 4988 scope.go:117] "RemoveContainer" containerID="eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de" Nov 23 08:27:52 crc kubenswrapper[4988]: E1123 08:27:52.494276 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de\": container with ID starting with eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de not found: ID does not exist" containerID="eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.494296 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de"} err="failed to get container status \"eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de\": rpc error: code = NotFound desc = could not find container \"eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de\": container with ID 
starting with eb96154518cf09c5ab71625f3d9721271ccffcbfb14801cd5b3f600f3df673de not found: ID does not exist" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.494308 4988 scope.go:117] "RemoveContainer" containerID="7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17" Nov 23 08:27:52 crc kubenswrapper[4988]: E1123 08:27:52.494640 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17\": container with ID starting with 7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17 not found: ID does not exist" containerID="7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.494661 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17"} err="failed to get container status \"7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17\": rpc error: code = NotFound desc = could not find container \"7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17\": container with ID starting with 7418a20287f03a4954ceb124451f1a34cc2164039562631a87591cabd0435f17 not found: ID does not exist" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.508104 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f097e75-cc9d-4a21-bf36-1320565d2be7" path="/var/lib/kubelet/pods/7f097e75-cc9d-4a21-bf36-1320565d2be7/volumes" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.545172 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.545247 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-log-httpd\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.545300 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-run-httpd\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.545377 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-config-data\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.545423 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.545522 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdwtp\" (UniqueName: 
\"kubernetes.io/projected/3d8010d9-0226-491d-80be-fc58006abaaa-kube-api-access-kdwtp\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.545552 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-scripts\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.546833 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-log-httpd\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.547305 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-run-httpd\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.551269 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.551823 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-scripts\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.551872 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-config-data\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.554856 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.582998 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdwtp\" (UniqueName: \"kubernetes.io/projected/3d8010d9-0226-491d-80be-fc58006abaaa-kube-api-access-kdwtp\") pod \"ceilometer-0\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " pod="openstack/ceilometer-0" Nov 23 08:27:52 crc kubenswrapper[4988]: I1123 08:27:52.756701 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:27:53 crc kubenswrapper[4988]: I1123 08:27:53.741829 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:27:54 crc kubenswrapper[4988]: I1123 08:27:54.278495 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerStarted","Data":"9e069b63b00fd73729b1617b6f5080e9ee842b2cf477447f900173da94d8d0b6"} Nov 23 08:27:54 crc kubenswrapper[4988]: I1123 08:27:54.278832 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerStarted","Data":"0b85b6152d5e1ccbbc9b1d743e6d1b2bedb6bbda7d94c0c27652844000057d4e"} Nov 23 08:27:55 crc kubenswrapper[4988]: I1123 08:27:55.289745 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerStarted","Data":"01747398afd20b1b09190b560ee88c436b56989522f9b167d7b8e824365f3995"} Nov 23 08:27:55 crc kubenswrapper[4988]: I1123 08:27:55.290209 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerStarted","Data":"c827bc0f9a30b8de52bd83c585c84adbd264a85b2982580007a86c6590bbb955"} Nov 23 08:27:57 crc kubenswrapper[4988]: I1123 08:27:57.311289 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerStarted","Data":"3cb8cde694b33b30cb3f9709a70d316a5a292d6a1cd5d838ae371e7ca0b0195d"} Nov 23 08:27:57 crc kubenswrapper[4988]: I1123 08:27:57.312762 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 08:27:57 crc kubenswrapper[4988]: I1123 08:27:57.367223 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.773294548 podStartE2EDuration="5.367180694s" podCreationTimestamp="2025-11-23 08:27:52 +0000 UTC" firstStartedPulling="2025-11-23 08:27:53.751852756 +0000 UTC m=+6126.060365519" lastFinishedPulling="2025-11-23 08:27:56.345738902 +0000 UTC m=+6128.654251665" observedRunningTime="2025-11-23 08:27:57.356177785 +0000 UTC m=+6129.664690568" watchObservedRunningTime="2025-11-23 08:27:57.367180694 +0000 UTC m=+6129.675693467" Nov 23 08:28:00 crc kubenswrapper[4988]: I1123 08:28:00.496022 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:28:00 crc kubenswrapper[4988]: E1123 08:28:00.496805 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:28:00 crc kubenswrapper[4988]: I1123 08:28:00.609811 4988 scope.go:117] "RemoveContainer" containerID="6c04aaf21d0216f6f27b0ad8fe623d539a0dd861624c1627444f2da3ffd63038" Nov 23 08:28:00 crc kubenswrapper[4988]: I1123 08:28:00.652046 4988 scope.go:117] "RemoveContainer" containerID="e1ddf054127b80092e1a5db2964fcf6d6feb96a6cadb5a47eee4e9d2198bb2d4" Nov 23 08:28:00 crc kubenswrapper[4988]: I1123 
08:28:00.690371 4988 scope.go:117] "RemoveContainer" containerID="51bdd03fbeec896dea5693a259e5bcd80ea5e3e268475c0faf1c57add910ee6c" Nov 23 08:28:00 crc kubenswrapper[4988]: I1123 08:28:00.724524 4988 scope.go:117] "RemoveContainer" containerID="ef2fb16aa6147d0d367bdd34aa2176e584f005a973234e0b8002a227a117ac7d" Nov 23 08:28:00 crc kubenswrapper[4988]: I1123 08:28:00.765961 4988 scope.go:117] "RemoveContainer" containerID="56f27d4e11d78b4ccb0f647ca76cb167f7a0ad11b1384e7efe3cd00325cfd6e1" Nov 23 08:28:00 crc kubenswrapper[4988]: I1123 08:28:00.785404 4988 scope.go:117] "RemoveContainer" containerID="39bc3f3737af5376963346319441940e77494c946c081ececa8a85e6a81833a4" Nov 23 08:28:13 crc kubenswrapper[4988]: I1123 08:28:13.495819 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:28:13 crc kubenswrapper[4988]: E1123 08:28:13.496573 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.570661 4988 generic.go:334] "Generic (PLEG): container finished" podID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerID="6ab40096a2ff9b385d629a1feb48ac924dbc47b101f22799666de4ede5330b09" exitCode=137 Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.571135 4988 generic.go:334] "Generic (PLEG): container finished" podID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerID="bc4a9e0545b68e1cce2e3346df0fca2623cf7dd3add31bdd1a668a55c1e294dc" exitCode=137 Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.570758 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerDied","Data":"6ab40096a2ff9b385d629a1feb48ac924dbc47b101f22799666de4ede5330b09"} Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.571168 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerDied","Data":"bc4a9e0545b68e1cce2e3346df0fca2623cf7dd3add31bdd1a668a55c1e294dc"} Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.770559 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.897841 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-config-data\") pod \"db43a0a7-39cd-4925-a5af-2d9cf760237d\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.897987 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-scripts\") pod \"db43a0a7-39cd-4925-a5af-2d9cf760237d\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.898177 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-combined-ca-bundle\") pod \"db43a0a7-39cd-4925-a5af-2d9cf760237d\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.898272 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6g7r\" (UniqueName: \"kubernetes.io/projected/db43a0a7-39cd-4925-a5af-2d9cf760237d-kube-api-access-t6g7r\") pod \"db43a0a7-39cd-4925-a5af-2d9cf760237d\" (UID: \"db43a0a7-39cd-4925-a5af-2d9cf760237d\") " Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.903756 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-scripts" (OuterVolumeSpecName: "scripts") pod "db43a0a7-39cd-4925-a5af-2d9cf760237d" (UID: "db43a0a7-39cd-4925-a5af-2d9cf760237d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:28:20 crc kubenswrapper[4988]: I1123 08:28:20.912607 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db43a0a7-39cd-4925-a5af-2d9cf760237d-kube-api-access-t6g7r" (OuterVolumeSpecName: "kube-api-access-t6g7r") pod "db43a0a7-39cd-4925-a5af-2d9cf760237d" (UID: "db43a0a7-39cd-4925-a5af-2d9cf760237d"). InnerVolumeSpecName "kube-api-access-t6g7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.000236 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.000274 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6g7r\" (UniqueName: \"kubernetes.io/projected/db43a0a7-39cd-4925-a5af-2d9cf760237d-kube-api-access-t6g7r\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.049064 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db43a0a7-39cd-4925-a5af-2d9cf760237d" (UID: "db43a0a7-39cd-4925-a5af-2d9cf760237d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.062224 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-config-data" (OuterVolumeSpecName: "config-data") pod "db43a0a7-39cd-4925-a5af-2d9cf760237d" (UID: "db43a0a7-39cd-4925-a5af-2d9cf760237d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.102127 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.102162 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db43a0a7-39cd-4925-a5af-2d9cf760237d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.588896 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"db43a0a7-39cd-4925-a5af-2d9cf760237d","Type":"ContainerDied","Data":"b2acecaa7324f7951b92114355cfaab2534f021e68fa374dfb5f594ffd438e11"} Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.588979 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.589294 4988 scope.go:117] "RemoveContainer" containerID="6ab40096a2ff9b385d629a1feb48ac924dbc47b101f22799666de4ede5330b09" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.638017 4988 scope.go:117] "RemoveContainer" containerID="bc4a9e0545b68e1cce2e3346df0fca2623cf7dd3add31bdd1a668a55c1e294dc" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.654880 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.681756 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.690016 4988 scope.go:117] "RemoveContainer" containerID="12ce64b9e0a6bad4e1fdadf09fd15fbebf6255d96d9d4292343b61c9e608be5d" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.691987 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 23 08:28:21 crc kubenswrapper[4988]: E1123 08:28:21.692667 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-notifier" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.692756 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-notifier" Nov 23 08:28:21 crc kubenswrapper[4988]: E1123 08:28:21.692846 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-api" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.692907 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-api" Nov 23 08:28:21 crc kubenswrapper[4988]: E1123 08:28:21.692992 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-listener" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.693076 4988 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-listener" Nov 23 08:28:21 crc kubenswrapper[4988]: E1123 08:28:21.693155 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-evaluator" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.693243 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-evaluator" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.693544 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-api" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.693619 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-evaluator" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.693705 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-listener" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.693788 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" containerName="aodh-notifier" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.696130 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.698703 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.703088 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.703291 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.703454 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.703475 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.703508 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-x9fjg" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.725248 4988 scope.go:117] "RemoveContainer" containerID="10b3c4745801ddf84a06516af9b8c2b4235083c3730ea04eb3ce46caa8913b87" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.817112 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.817166 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlzc\" (UniqueName: \"kubernetes.io/projected/ad8a49ac-30cb-4638-b369-fa9afad39287-kube-api-access-cqlzc\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.817221 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-config-data\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.817298 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-scripts\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.817333 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-public-tls-certs\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.817370 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-internal-tls-certs\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.918790 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.918840 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqlzc\" (UniqueName: \"kubernetes.io/projected/ad8a49ac-30cb-4638-b369-fa9afad39287-kube-api-access-cqlzc\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.918876 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-config-data\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.918915 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-scripts\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.918940 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-public-tls-certs\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.918970 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-internal-tls-certs\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.923696 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-internal-tls-certs\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.924230 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-scripts\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.924647 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.927881 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-public-tls-certs\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.934889 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad8a49ac-30cb-4638-b369-fa9afad39287-config-data\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:21 crc kubenswrapper[4988]: I1123 08:28:21.935167 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqlzc\" (UniqueName: \"kubernetes.io/projected/ad8a49ac-30cb-4638-b369-fa9afad39287-kube-api-access-cqlzc\") pod \"aodh-0\" (UID: \"ad8a49ac-30cb-4638-b369-fa9afad39287\") " pod="openstack/aodh-0" Nov 23 08:28:22 crc kubenswrapper[4988]: I1123 08:28:22.034512 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 23 08:28:22 crc kubenswrapper[4988]: I1123 08:28:22.521610 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db43a0a7-39cd-4925-a5af-2d9cf760237d" path="/var/lib/kubelet/pods/db43a0a7-39cd-4925-a5af-2d9cf760237d/volumes" Nov 23 08:28:22 crc kubenswrapper[4988]: I1123 08:28:22.565132 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 23 08:28:22 crc kubenswrapper[4988]: I1123 08:28:22.601727 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ad8a49ac-30cb-4638-b369-fa9afad39287","Type":"ContainerStarted","Data":"957f78eb701e87ef4d14567e0c52955564912c6ac5418bbd9159e48446259f0c"} Nov 23 08:28:22 crc kubenswrapper[4988]: I1123 08:28:22.762303 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 08:28:23 crc kubenswrapper[4988]: I1123 08:28:23.619784 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ad8a49ac-30cb-4638-b369-fa9afad39287","Type":"ContainerStarted","Data":"ea4654f58395603a7f96db24b47ef667e8e5065353d56a9f95824721ad09b967"} Nov 23 08:28:23 crc kubenswrapper[4988]: I1123 08:28:23.620454 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ad8a49ac-30cb-4638-b369-fa9afad39287","Type":"ContainerStarted","Data":"5e8071d81325579928d8c70b52ac4574d6352837e85da9fe806dc4c3c9f06cae"} Nov 23 08:28:24 crc kubenswrapper[4988]: I1123 08:28:24.496613 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:28:24 crc kubenswrapper[4988]: I1123 08:28:24.648687 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ad8a49ac-30cb-4638-b369-fa9afad39287","Type":"ContainerStarted","Data":"ca3e8ffe58ca256daba0d1371d42e71ccef5ac38d84dda3fb87dbf5926a9bccf"} Nov 23 08:28:24 crc kubenswrapper[4988]: I1123 08:28:24.649787 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ad8a49ac-30cb-4638-b369-fa9afad39287","Type":"ContainerStarted","Data":"d4b34213d12c30229ddf01f17aa7129fd0688000c37edc3606c333ba5e08e11a"} Nov 23 08:28:24 crc kubenswrapper[4988]: I1123 08:28:24.692308 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.235558794 podStartE2EDuration="3.692290496s" podCreationTimestamp="2025-11-23 08:28:21 +0000 UTC" firstStartedPulling="2025-11-23 08:28:22.577424697 +0000 UTC m=+6154.885937460" lastFinishedPulling="2025-11-23 08:28:24.034156399 +0000 UTC m=+6156.342669162" observedRunningTime="2025-11-23 08:28:24.67277204 +0000 UTC m=+6156.981284813" watchObservedRunningTime="2025-11-23 08:28:24.692290496 +0000 UTC m=+6157.000803259" Nov 23 08:28:25 crc kubenswrapper[4988]: I1123 08:28:25.665306 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"e9ca61ab9057b2b0d27b5433a43020e51548a4e9ce3df4a2a0d90fd878028b23"} Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.041141 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-867587c497-n25bb"] Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.043084 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.047415 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.073405 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-867587c497-n25bb"] Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.145080 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8426m\" (UniqueName: \"kubernetes.io/projected/025edc07-0e90-48bf-957e-8842578e17d1-kube-api-access-8426m\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.145163 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-dns-svc\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.145310 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-config\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.145663 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-openstack-cell1\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.145884 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-nb\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.145941 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-sb\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.247867 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-openstack-cell1\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.247949 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-nb\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: 
\"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.247971 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-sb\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.248023 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8426m\" (UniqueName: \"kubernetes.io/projected/025edc07-0e90-48bf-957e-8842578e17d1-kube-api-access-8426m\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.248068 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-dns-svc\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.248089 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-config\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.249059 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-config\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.249190 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-nb\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.249371 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-dns-svc\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.249488 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-sb\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.249691 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-openstack-cell1\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 
08:28:27.269050 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8426m\" (UniqueName: \"kubernetes.io/projected/025edc07-0e90-48bf-957e-8842578e17d1-kube-api-access-8426m\") pod \"dnsmasq-dns-867587c497-n25bb\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.385280 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.660837 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.661669 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1a64aea4-b7f0-4a4f-971c-3312892fe956" containerName="kube-state-metrics" containerID="cri-o://03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84" gracePeriod=30 Nov 23 08:28:27 crc kubenswrapper[4988]: I1123 08:28:27.873108 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-867587c497-n25bb"] Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.080458 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.166474 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkqz9\" (UniqueName: \"kubernetes.io/projected/1a64aea4-b7f0-4a4f-971c-3312892fe956-kube-api-access-dkqz9\") pod \"1a64aea4-b7f0-4a4f-971c-3312892fe956\" (UID: \"1a64aea4-b7f0-4a4f-971c-3312892fe956\") " Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.182416 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a64aea4-b7f0-4a4f-971c-3312892fe956-kube-api-access-dkqz9" (OuterVolumeSpecName: "kube-api-access-dkqz9") pod "1a64aea4-b7f0-4a4f-971c-3312892fe956" (UID: "1a64aea4-b7f0-4a4f-971c-3312892fe956"). InnerVolumeSpecName "kube-api-access-dkqz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.273553 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkqz9\" (UniqueName: \"kubernetes.io/projected/1a64aea4-b7f0-4a4f-971c-3312892fe956-kube-api-access-dkqz9\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.708662 4988 generic.go:334] "Generic (PLEG): container finished" podID="1a64aea4-b7f0-4a4f-971c-3312892fe956" containerID="03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84" exitCode=2 Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.708793 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1a64aea4-b7f0-4a4f-971c-3312892fe956","Type":"ContainerDied","Data":"03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84"} Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.708858 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1a64aea4-b7f0-4a4f-971c-3312892fe956","Type":"ContainerDied","Data":"368b597d77323f807feee6c0f725fbd4aa28e65bf95ce8dbd9001b6785237233"} Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.708907 4988 scope.go:117] "RemoveContainer" containerID="03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.709165 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.712086 4988 generic.go:334] "Generic (PLEG): container finished" podID="025edc07-0e90-48bf-957e-8842578e17d1" containerID="6ee1067fccbed43db6ec288fca6cc8e065f7f887bf5f31451a8189c9ee1d8651" exitCode=0 Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.712134 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867587c497-n25bb" event={"ID":"025edc07-0e90-48bf-957e-8842578e17d1","Type":"ContainerDied","Data":"6ee1067fccbed43db6ec288fca6cc8e065f7f887bf5f31451a8189c9ee1d8651"} Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.712167 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867587c497-n25bb" event={"ID":"025edc07-0e90-48bf-957e-8842578e17d1","Type":"ContainerStarted","Data":"b45717b22a4125fc2d1803dd0bba8ef56024b253a2a2f1d5f35a1b9c90e2fa6c"} Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.804690 4988 scope.go:117] "RemoveContainer" containerID="03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84" Nov 23 08:28:28 crc kubenswrapper[4988]: E1123 08:28:28.806970 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84\": container with ID starting with 03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84 not found: ID does not exist" containerID="03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.807031 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84"} err="failed to get container status \"03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84\": rpc error: code = NotFound desc = could not find container 
\"03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84\": container with ID starting with 03f140a0ba305dabc5a424a09cf6f35b02b375323d64f6973bccbc9376934c84 not found: ID does not exist" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.855787 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.870492 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.889506 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:28:28 crc kubenswrapper[4988]: E1123 08:28:28.890044 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a64aea4-b7f0-4a4f-971c-3312892fe956" containerName="kube-state-metrics" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.890062 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a64aea4-b7f0-4a4f-971c-3312892fe956" containerName="kube-state-metrics" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.891003 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a64aea4-b7f0-4a4f-971c-3312892fe956" containerName="kube-state-metrics" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.891840 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.896232 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.896424 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.903715 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.987875 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8de623af-ca19-44fc-a166-091a0977bd5d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.987926 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjlrh\" (UniqueName: \"kubernetes.io/projected/8de623af-ca19-44fc-a166-091a0977bd5d-kube-api-access-fjlrh\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.988019 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8de623af-ca19-44fc-a166-091a0977bd5d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:28 crc kubenswrapper[4988]: I1123 08:28:28.988231 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de623af-ca19-44fc-a166-091a0977bd5d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " 
pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.090431 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8de623af-ca19-44fc-a166-091a0977bd5d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.091846 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de623af-ca19-44fc-a166-091a0977bd5d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.092018 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8de623af-ca19-44fc-a166-091a0977bd5d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.092120 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjlrh\" (UniqueName: \"kubernetes.io/projected/8de623af-ca19-44fc-a166-091a0977bd5d-kube-api-access-fjlrh\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.095249 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8de623af-ca19-44fc-a166-091a0977bd5d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.098444 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8de623af-ca19-44fc-a166-091a0977bd5d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.099442 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de623af-ca19-44fc-a166-091a0977bd5d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.110915 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjlrh\" (UniqueName: \"kubernetes.io/projected/8de623af-ca19-44fc-a166-091a0977bd5d-kube-api-access-fjlrh\") pod \"kube-state-metrics-0\" (UID: \"8de623af-ca19-44fc-a166-091a0977bd5d\") " pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.274912 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.472176 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.474102 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="ceilometer-central-agent" containerID="cri-o://9e069b63b00fd73729b1617b6f5080e9ee842b2cf477447f900173da94d8d0b6" gracePeriod=30 Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.474621 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="proxy-httpd" containerID="cri-o://3cb8cde694b33b30cb3f9709a70d316a5a292d6a1cd5d838ae371e7ca0b0195d" gracePeriod=30 Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.474638 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="ceilometer-notification-agent" containerID="cri-o://c827bc0f9a30b8de52bd83c585c84adbd264a85b2982580007a86c6590bbb955" gracePeriod=30 Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.474620 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="sg-core" containerID="cri-o://01747398afd20b1b09190b560ee88c436b56989522f9b167d7b8e824365f3995" gracePeriod=30 Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.723674 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867587c497-n25bb" event={"ID":"025edc07-0e90-48bf-957e-8842578e17d1","Type":"ContainerStarted","Data":"cc6156b2055bc8666595e72eed050a7bb97f14e171e51395fa7fc0698d44ed3a"} Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.723821 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.742849 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:28:29 crc kubenswrapper[4988]: W1123 08:28:29.753827 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8de623af_ca19_44fc_a166_091a0977bd5d.slice/crio-2dcc96211b60df793722d742759cd91890ca461fcdc6e0afe12ab415d1d11981 WatchSource:0}: Error finding container 2dcc96211b60df793722d742759cd91890ca461fcdc6e0afe12ab415d1d11981: Status 404 returned error can't find the container with id 2dcc96211b60df793722d742759cd91890ca461fcdc6e0afe12ab415d1d11981 Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.757184 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:28:29 crc kubenswrapper[4988]: I1123 08:28:29.776737 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-867587c497-n25bb" podStartSLOduration=2.776717257 podStartE2EDuration="2.776717257s" podCreationTimestamp="2025-11-23 08:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:28:29.774899263 +0000 UTC m=+6162.083412026" watchObservedRunningTime="2025-11-23 08:28:29.776717257 +0000 UTC m=+6162.085230020" Nov 23 08:28:30 crc 
kubenswrapper[4988]: I1123 08:28:30.526070 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a64aea4-b7f0-4a4f-971c-3312892fe956" path="/var/lib/kubelet/pods/1a64aea4-b7f0-4a4f-971c-3312892fe956/volumes" Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.738374 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8de623af-ca19-44fc-a166-091a0977bd5d","Type":"ContainerStarted","Data":"4531061ccd9c2a92de3f3a7978baa12c94905e5b1f903bb5f5645f5090592214"} Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.738939 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.738991 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8de623af-ca19-44fc-a166-091a0977bd5d","Type":"ContainerStarted","Data":"2dcc96211b60df793722d742759cd91890ca461fcdc6e0afe12ab415d1d11981"} Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.743452 4988 generic.go:334] "Generic (PLEG): container finished" podID="3d8010d9-0226-491d-80be-fc58006abaaa" containerID="3cb8cde694b33b30cb3f9709a70d316a5a292d6a1cd5d838ae371e7ca0b0195d" exitCode=0 Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.743493 4988 generic.go:334] "Generic (PLEG): container finished" podID="3d8010d9-0226-491d-80be-fc58006abaaa" containerID="01747398afd20b1b09190b560ee88c436b56989522f9b167d7b8e824365f3995" exitCode=2 Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.743512 4988 generic.go:334] "Generic (PLEG): container finished" podID="3d8010d9-0226-491d-80be-fc58006abaaa" containerID="c827bc0f9a30b8de52bd83c585c84adbd264a85b2982580007a86c6590bbb955" exitCode=0 Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.743535 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerDied","Data":"3cb8cde694b33b30cb3f9709a70d316a5a292d6a1cd5d838ae371e7ca0b0195d"} Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.743530 4988 generic.go:334] "Generic (PLEG): container finished" podID="3d8010d9-0226-491d-80be-fc58006abaaa" containerID="9e069b63b00fd73729b1617b6f5080e9ee842b2cf477447f900173da94d8d0b6" exitCode=0 Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.743586 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerDied","Data":"01747398afd20b1b09190b560ee88c436b56989522f9b167d7b8e824365f3995"} Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.743608 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerDied","Data":"c827bc0f9a30b8de52bd83c585c84adbd264a85b2982580007a86c6590bbb955"} Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.749127 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerDied","Data":"9e069b63b00fd73729b1617b6f5080e9ee842b2cf477447f900173da94d8d0b6"} Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.773838 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.353208923 podStartE2EDuration="2.773815425s" podCreationTimestamp="2025-11-23 08:28:28 +0000 UTC" firstStartedPulling="2025-11-23 
08:28:29.756826912 +0000 UTC m=+6162.065339675" lastFinishedPulling="2025-11-23 08:28:30.177433424 +0000 UTC m=+6162.485946177" observedRunningTime="2025-11-23 08:28:30.761685799 +0000 UTC m=+6163.070198602" watchObservedRunningTime="2025-11-23 08:28:30.773815425 +0000 UTC m=+6163.082328198" Nov 23 08:28:30 crc kubenswrapper[4988]: I1123 08:28:30.989329 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.046654 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-combined-ca-bundle\") pod \"3d8010d9-0226-491d-80be-fc58006abaaa\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.046735 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-sg-core-conf-yaml\") pod \"3d8010d9-0226-491d-80be-fc58006abaaa\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.046768 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdwtp\" (UniqueName: \"kubernetes.io/projected/3d8010d9-0226-491d-80be-fc58006abaaa-kube-api-access-kdwtp\") pod \"3d8010d9-0226-491d-80be-fc58006abaaa\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.046849 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-config-data\") pod \"3d8010d9-0226-491d-80be-fc58006abaaa\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.046957 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-scripts\") pod \"3d8010d9-0226-491d-80be-fc58006abaaa\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.047883 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-run-httpd\") pod \"3d8010d9-0226-491d-80be-fc58006abaaa\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.047947 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-log-httpd\") pod \"3d8010d9-0226-491d-80be-fc58006abaaa\" (UID: \"3d8010d9-0226-491d-80be-fc58006abaaa\") " Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.048391 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3d8010d9-0226-491d-80be-fc58006abaaa" (UID: "3d8010d9-0226-491d-80be-fc58006abaaa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.048879 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3d8010d9-0226-491d-80be-fc58006abaaa" (UID: "3d8010d9-0226-491d-80be-fc58006abaaa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.048951 4988 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.053529 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8010d9-0226-491d-80be-fc58006abaaa-kube-api-access-kdwtp" (OuterVolumeSpecName: "kube-api-access-kdwtp") pod "3d8010d9-0226-491d-80be-fc58006abaaa" (UID: "3d8010d9-0226-491d-80be-fc58006abaaa"). InnerVolumeSpecName "kube-api-access-kdwtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.053546 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-scripts" (OuterVolumeSpecName: "scripts") pod "3d8010d9-0226-491d-80be-fc58006abaaa" (UID: "3d8010d9-0226-491d-80be-fc58006abaaa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.099782 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3d8010d9-0226-491d-80be-fc58006abaaa" (UID: "3d8010d9-0226-491d-80be-fc58006abaaa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.150414 4988 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.150457 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdwtp\" (UniqueName: \"kubernetes.io/projected/3d8010d9-0226-491d-80be-fc58006abaaa-kube-api-access-kdwtp\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.150474 4988 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.150486 4988 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d8010d9-0226-491d-80be-fc58006abaaa-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.170445 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-config-data" (OuterVolumeSpecName: "config-data") pod "3d8010d9-0226-491d-80be-fc58006abaaa" (UID: "3d8010d9-0226-491d-80be-fc58006abaaa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.170870 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d8010d9-0226-491d-80be-fc58006abaaa" (UID: "3d8010d9-0226-491d-80be-fc58006abaaa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.252775 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.252813 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8010d9-0226-491d-80be-fc58006abaaa-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.757517 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.759882 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d8010d9-0226-491d-80be-fc58006abaaa","Type":"ContainerDied","Data":"0b85b6152d5e1ccbbc9b1d743e6d1b2bedb6bbda7d94c0c27652844000057d4e"} Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.760112 4988 scope.go:117] "RemoveContainer" containerID="3cb8cde694b33b30cb3f9709a70d316a5a292d6a1cd5d838ae371e7ca0b0195d" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.792832 4988 scope.go:117] "RemoveContainer" containerID="01747398afd20b1b09190b560ee88c436b56989522f9b167d7b8e824365f3995" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.802345 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.824229 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.844493 4988 scope.go:117] "RemoveContainer" containerID="c827bc0f9a30b8de52bd83c585c84adbd264a85b2982580007a86c6590bbb955" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.847449 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:28:31 crc kubenswrapper[4988]: E1123 08:28:31.847900 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="proxy-httpd" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.847917 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="proxy-httpd" Nov 23 08:28:31 crc kubenswrapper[4988]: E1123 08:28:31.847927 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="ceilometer-central-agent" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.847934 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="ceilometer-central-agent" Nov 23 08:28:31 crc kubenswrapper[4988]: E1123 08:28:31.847949 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="sg-core" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.847954 4988 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="sg-core" Nov 23 08:28:31 crc kubenswrapper[4988]: E1123 08:28:31.847989 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="ceilometer-notification-agent" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.847999 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="ceilometer-notification-agent" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.848389 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="proxy-httpd" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.848437 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="ceilometer-notification-agent" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.848550 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="sg-core" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.848568 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" containerName="ceilometer-central-agent" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.852970 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.855149 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.855621 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.855655 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.856547 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.864173 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.864313 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2hrh\" (UniqueName: \"kubernetes.io/projected/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-kube-api-access-d2hrh\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.864358 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-config-data\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.864438 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.864495 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-run-httpd\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.864561 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-log-httpd\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.864595 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-scripts\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.864676 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.890956 4988 scope.go:117] "RemoveContainer" containerID="9e069b63b00fd73729b1617b6f5080e9ee842b2cf477447f900173da94d8d0b6" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.966773 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-log-httpd\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.966846 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-scripts\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.966919 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.966951 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.967042 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2hrh\" (UniqueName: \"kubernetes.io/projected/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-kube-api-access-d2hrh\") pod \"ceilometer-0\" (UID: 
\"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.967087 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-config-data\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.967127 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.967154 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-run-httpd\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.968748 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-log-httpd\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.968987 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-run-httpd\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.972495 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.972844 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-scripts\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.973292 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.973336 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.975645 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-config-data\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 
08:28:31 crc kubenswrapper[4988]: I1123 08:28:31.996111 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2hrh\" (UniqueName: \"kubernetes.io/projected/dce5c5d3-bdb1-4f2a-a12d-9e093f47262a-kube-api-access-d2hrh\") pod \"ceilometer-0\" (UID: \"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a\") " pod="openstack/ceilometer-0" Nov 23 08:28:32 crc kubenswrapper[4988]: I1123 08:28:32.181711 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:28:32 crc kubenswrapper[4988]: I1123 08:28:32.509865 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8010d9-0226-491d-80be-fc58006abaaa" path="/var/lib/kubelet/pods/3d8010d9-0226-491d-80be-fc58006abaaa/volumes" Nov 23 08:28:32 crc kubenswrapper[4988]: I1123 08:28:32.667691 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:28:32 crc kubenswrapper[4988]: W1123 08:28:32.670010 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddce5c5d3_bdb1_4f2a_a12d_9e093f47262a.slice/crio-603718f7a84bd5c11ba42dca73b75963f50687c0bda38ea29349ccce379f3dd8 WatchSource:0}: Error finding container 603718f7a84bd5c11ba42dca73b75963f50687c0bda38ea29349ccce379f3dd8: Status 404 returned error can't find the container with id 603718f7a84bd5c11ba42dca73b75963f50687c0bda38ea29349ccce379f3dd8 Nov 23 08:28:32 crc kubenswrapper[4988]: I1123 08:28:32.766884 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a","Type":"ContainerStarted","Data":"603718f7a84bd5c11ba42dca73b75963f50687c0bda38ea29349ccce379f3dd8"} Nov 23 08:28:32 crc kubenswrapper[4988]: I1123 08:28:32.928716 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="1a64aea4-b7f0-4a4f-971c-3312892fe956" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:28:33 crc kubenswrapper[4988]: I1123 08:28:33.785448 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a","Type":"ContainerStarted","Data":"8b62bb0b1a321d6fd17e3d73e6ec3e51e7b2fa76fb40c43f317503b011dfb88d"} Nov 23 08:28:34 crc kubenswrapper[4988]: I1123 08:28:34.795902 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a","Type":"ContainerStarted","Data":"3a56ced0870056bb1b1881357d6b0df2b6932d2cb6a729a861d41633c6e3d01e"} Nov 23 08:28:34 crc kubenswrapper[4988]: I1123 08:28:34.796265 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a","Type":"ContainerStarted","Data":"b83917a0925b8e7f543655280c7dc5f7b79b95dd7aa52205aab270d9449c2544"} Nov 23 08:28:36 crc kubenswrapper[4988]: I1123 08:28:36.829439 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dce5c5d3-bdb1-4f2a-a12d-9e093f47262a","Type":"ContainerStarted","Data":"5a9df588f76e98359a7e132c84efc182989acca9778c1f1030a15f313e4e669f"} Nov 23 08:28:36 crc kubenswrapper[4988]: I1123 08:28:36.830106 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 08:28:36 crc kubenswrapper[4988]: I1123 
08:28:36.856340 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.608081456 podStartE2EDuration="5.856321187s" podCreationTimestamp="2025-11-23 08:28:31 +0000 UTC" firstStartedPulling="2025-11-23 08:28:32.672404817 +0000 UTC m=+6164.980917600" lastFinishedPulling="2025-11-23 08:28:35.920644568 +0000 UTC m=+6168.229157331" observedRunningTime="2025-11-23 08:28:36.855395325 +0000 UTC m=+6169.163908128" watchObservedRunningTime="2025-11-23 08:28:36.856321187 +0000 UTC m=+6169.164833950" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.044503 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7164-account-create-4hnhx"] Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.055422 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-p7456"] Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.066395 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-p7456"] Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.076101 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7164-account-create-4hnhx"] Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.387446 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.455025 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d57df949-wlqch"] Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.456348 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54d57df949-wlqch" podUID="32c53be8-ebd9-4e99-8395-c93637b97a50" containerName="dnsmasq-dns" containerID="cri-o://95616d463b906dcc43bcaec1a64844fa42fc36325d4eb4c48b0eb95f5994fa60" gracePeriod=10 Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.622558 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fb59c9d47-q8wcr"] Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.627348 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.658047 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fb59c9d47-q8wcr"] Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.809458 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-config\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.809848 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-openstack-cell1\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.809934 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-dns-svc\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.810038 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.810106 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cmnh\" (UniqueName: \"kubernetes.io/projected/7adc1cbc-f014-4666-b8ca-0400120c3c3e-kube-api-access-7cmnh\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.810183 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.849438 4988 generic.go:334] "Generic (PLEG): container finished" podID="32c53be8-ebd9-4e99-8395-c93637b97a50" containerID="95616d463b906dcc43bcaec1a64844fa42fc36325d4eb4c48b0eb95f5994fa60" exitCode=0 Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.850397 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d57df949-wlqch" event={"ID":"32c53be8-ebd9-4e99-8395-c93637b97a50","Type":"ContainerDied","Data":"95616d463b906dcc43bcaec1a64844fa42fc36325d4eb4c48b0eb95f5994fa60"} Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.911913 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-config\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: 
\"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.912048 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-openstack-cell1\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.912226 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-dns-svc\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.912387 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.912501 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cmnh\" (UniqueName: \"kubernetes.io/projected/7adc1cbc-f014-4666-b8ca-0400120c3c3e-kube-api-access-7cmnh\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.912549 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.913170 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-config\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.913206 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-openstack-cell1\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.913959 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-dns-svc\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.913993 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc 
kubenswrapper[4988]: I1123 08:28:37.914432 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7adc1cbc-f014-4666-b8ca-0400120c3c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.931534 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cmnh\" (UniqueName: \"kubernetes.io/projected/7adc1cbc-f014-4666-b8ca-0400120c3c3e-kube-api-access-7cmnh\") pod \"dnsmasq-dns-6fb59c9d47-q8wcr\" (UID: \"7adc1cbc-f014-4666-b8ca-0400120c3c3e\") " pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:37 crc kubenswrapper[4988]: I1123 08:28:37.982669 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.085430 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d57df949-wlqch" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.116861 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfgm7\" (UniqueName: \"kubernetes.io/projected/32c53be8-ebd9-4e99-8395-c93637b97a50-kube-api-access-kfgm7\") pod \"32c53be8-ebd9-4e99-8395-c93637b97a50\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.116918 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-sb\") pod \"32c53be8-ebd9-4e99-8395-c93637b97a50\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.116991 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-config\") pod \"32c53be8-ebd9-4e99-8395-c93637b97a50\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.117018 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-nb\") pod \"32c53be8-ebd9-4e99-8395-c93637b97a50\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.117073 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-dns-svc\") pod \"32c53be8-ebd9-4e99-8395-c93637b97a50\" (UID: \"32c53be8-ebd9-4e99-8395-c93637b97a50\") " Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.140410 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c53be8-ebd9-4e99-8395-c93637b97a50-kube-api-access-kfgm7" (OuterVolumeSpecName: "kube-api-access-kfgm7") pod "32c53be8-ebd9-4e99-8395-c93637b97a50" (UID: "32c53be8-ebd9-4e99-8395-c93637b97a50"). InnerVolumeSpecName "kube-api-access-kfgm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.211327 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32c53be8-ebd9-4e99-8395-c93637b97a50" (UID: "32c53be8-ebd9-4e99-8395-c93637b97a50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.215812 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32c53be8-ebd9-4e99-8395-c93637b97a50" (UID: "32c53be8-ebd9-4e99-8395-c93637b97a50"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.222155 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.222186 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.222214 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfgm7\" (UniqueName: \"kubernetes.io/projected/32c53be8-ebd9-4e99-8395-c93637b97a50-kube-api-access-kfgm7\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.233815 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32c53be8-ebd9-4e99-8395-c93637b97a50" (UID: "32c53be8-ebd9-4e99-8395-c93637b97a50"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.257399 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-config" (OuterVolumeSpecName: "config") pod "32c53be8-ebd9-4e99-8395-c93637b97a50" (UID: "32c53be8-ebd9-4e99-8395-c93637b97a50"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.324466 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.324495 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32c53be8-ebd9-4e99-8395-c93637b97a50-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.478338 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fb59c9d47-q8wcr"] Nov 23 08:28:38 crc kubenswrapper[4988]: W1123 08:28:38.493568 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7adc1cbc_f014_4666_b8ca_0400120c3c3e.slice/crio-9592cec6204cb8a5e08af175f5bc93ed1a8b2879bfd5a0680d427cfc12c10724 WatchSource:0}: Error finding container 9592cec6204cb8a5e08af175f5bc93ed1a8b2879bfd5a0680d427cfc12c10724: Status 404 returned error can't find the container with id 9592cec6204cb8a5e08af175f5bc93ed1a8b2879bfd5a0680d427cfc12c10724 Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.510616 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4" path="/var/lib/kubelet/pods/6ec87dde-30c6-42d0-b3ec-aa61bc52a9a4/volumes" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.512023 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ce86e7-aef9-4c27-9b85-1705b1ef877d" path="/var/lib/kubelet/pods/78ce86e7-aef9-4c27-9b85-1705b1ef877d/volumes" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.862026 4988 generic.go:334] "Generic (PLEG): container finished" podID="7adc1cbc-f014-4666-b8ca-0400120c3c3e" containerID="90a53c007b830ee07a6d8800e09cbecf9690f221efc4ee7187bdc7a09dcf7ce0" exitCode=0 Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.862185 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" event={"ID":"7adc1cbc-f014-4666-b8ca-0400120c3c3e","Type":"ContainerDied","Data":"90a53c007b830ee07a6d8800e09cbecf9690f221efc4ee7187bdc7a09dcf7ce0"} Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.862379 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" event={"ID":"7adc1cbc-f014-4666-b8ca-0400120c3c3e","Type":"ContainerStarted","Data":"9592cec6204cb8a5e08af175f5bc93ed1a8b2879bfd5a0680d427cfc12c10724"} Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.864165 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d57df949-wlqch" event={"ID":"32c53be8-ebd9-4e99-8395-c93637b97a50","Type":"ContainerDied","Data":"c53c55a20c7687e752e0df17f8ef8536ba70a1f85b1e5a1dca863544e1f6cdd2"} Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.864226 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d57df949-wlqch" Nov 23 08:28:38 crc kubenswrapper[4988]: I1123 08:28:38.864236 4988 scope.go:117] "RemoveContainer" containerID="95616d463b906dcc43bcaec1a64844fa42fc36325d4eb4c48b0eb95f5994fa60" Nov 23 08:28:39 crc kubenswrapper[4988]: I1123 08:28:39.070961 4988 scope.go:117] "RemoveContainer" containerID="d9fbe7927c6a2a72fb39e808a60b805552e358fc9883260774564fc1fef1980b" Nov 23 08:28:39 crc kubenswrapper[4988]: I1123 08:28:39.086215 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d57df949-wlqch"] Nov 23 08:28:39 crc kubenswrapper[4988]: I1123 08:28:39.094738 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54d57df949-wlqch"] Nov 23 08:28:39 crc kubenswrapper[4988]: I1123 08:28:39.287406 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 08:28:39 crc kubenswrapper[4988]: I1123 08:28:39.876998 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" event={"ID":"7adc1cbc-f014-4666-b8ca-0400120c3c3e","Type":"ContainerStarted","Data":"d4c0c079ecddd9e76664f45cb8751f98261e6db3d9d63b416bc4e4b00039295f"} Nov 23 08:28:39 crc kubenswrapper[4988]: I1123 08:28:39.877566 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:39 crc kubenswrapper[4988]: I1123 08:28:39.915751 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" podStartSLOduration=2.915733032 podStartE2EDuration="2.915733032s" podCreationTimestamp="2025-11-23 08:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:28:39.898574123 +0000 UTC m=+6172.207086916" watchObservedRunningTime="2025-11-23 08:28:39.915733032 +0000 UTC m=+6172.224245785" Nov 23 08:28:40 crc kubenswrapper[4988]: I1123 08:28:40.513304 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c53be8-ebd9-4e99-8395-c93637b97a50" path="/var/lib/kubelet/pods/32c53be8-ebd9-4e99-8395-c93637b97a50/volumes" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.470975 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz"] Nov 23 08:28:43 crc kubenswrapper[4988]: E1123 08:28:43.471806 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c53be8-ebd9-4e99-8395-c93637b97a50" containerName="dnsmasq-dns" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.471818 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c53be8-ebd9-4e99-8395-c93637b97a50" containerName="dnsmasq-dns" Nov 23 08:28:43 crc kubenswrapper[4988]: E1123 08:28:43.471833 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c53be8-ebd9-4e99-8395-c93637b97a50" containerName="init" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.471839 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c53be8-ebd9-4e99-8395-c93637b97a50" containerName="init" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.472022 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c53be8-ebd9-4e99-8395-c93637b97a50" containerName="dnsmasq-dns" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.472691 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.483583 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.483889 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.483955 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.497652 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.506790 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz"] Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.548155 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.548284 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zk5c\" (UniqueName: \"kubernetes.io/projected/1d55647b-8db7-4352-aba2-c1bc67c744e0-kube-api-access-9zk5c\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.548417 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.548549 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.649998 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.650346 4988 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9zk5c\" (UniqueName: \"kubernetes.io/projected/1d55647b-8db7-4352-aba2-c1bc67c744e0-kube-api-access-9zk5c\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.650710 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.650772 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.657056 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.660755 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.660840 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.669910 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zk5c\" (UniqueName: \"kubernetes.io/projected/1d55647b-8db7-4352-aba2-c1bc67c744e0-kube-api-access-9zk5c\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:43 crc kubenswrapper[4988]: I1123 08:28:43.789636 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:28:44 crc kubenswrapper[4988]: I1123 08:28:44.555555 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz"] Nov 23 08:28:44 crc kubenswrapper[4988]: W1123 08:28:44.562602 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d55647b_8db7_4352_aba2_c1bc67c744e0.slice/crio-6bfc62de4b1f2d39090c8571fd9b860527086ca56cfb7a991d8d758c0c623616 WatchSource:0}: Error finding container 6bfc62de4b1f2d39090c8571fd9b860527086ca56cfb7a991d8d758c0c623616: Status 404 returned error can't find the container with id 6bfc62de4b1f2d39090c8571fd9b860527086ca56cfb7a991d8d758c0c623616 Nov 23 08:28:44 crc kubenswrapper[4988]: I1123 08:28:44.936386 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" event={"ID":"1d55647b-8db7-4352-aba2-c1bc67c744e0","Type":"ContainerStarted","Data":"6bfc62de4b1f2d39090c8571fd9b860527086ca56cfb7a991d8d758c0c623616"} Nov 23 08:28:47 crc kubenswrapper[4988]: I1123 08:28:47.985365 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fb59c9d47-q8wcr" Nov 23 08:28:48 crc kubenswrapper[4988]: I1123 08:28:48.046772 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-867587c497-n25bb"] Nov 23 08:28:48 crc kubenswrapper[4988]: I1123 08:28:48.047036 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-867587c497-n25bb" podUID="025edc07-0e90-48bf-957e-8842578e17d1" containerName="dnsmasq-dns" containerID="cri-o://cc6156b2055bc8666595e72eed050a7bb97f14e171e51395fa7fc0698d44ed3a" gracePeriod=10 Nov 23 08:28:48 crc kubenswrapper[4988]: I1123 08:28:48.978655 4988 generic.go:334] "Generic (PLEG): container finished" podID="025edc07-0e90-48bf-957e-8842578e17d1" containerID="cc6156b2055bc8666595e72eed050a7bb97f14e171e51395fa7fc0698d44ed3a" exitCode=0 Nov 23 08:28:48 crc kubenswrapper[4988]: I1123 08:28:48.978735 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867587c497-n25bb" event={"ID":"025edc07-0e90-48bf-957e-8842578e17d1","Type":"ContainerDied","Data":"cc6156b2055bc8666595e72eed050a7bb97f14e171e51395fa7fc0698d44ed3a"} Nov 23 08:28:52 crc kubenswrapper[4988]: I1123 08:28:52.386911 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-867587c497-n25bb" podUID="025edc07-0e90-48bf-957e-8842578e17d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.134:5353: connect: connection refused" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.374922 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.657027 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.695127 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-sb\") pod \"025edc07-0e90-48bf-957e-8842578e17d1\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.695222 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-dns-svc\") pod \"025edc07-0e90-48bf-957e-8842578e17d1\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.695465 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8426m\" (UniqueName: \"kubernetes.io/projected/025edc07-0e90-48bf-957e-8842578e17d1-kube-api-access-8426m\") pod \"025edc07-0e90-48bf-957e-8842578e17d1\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.695517 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-config\") pod \"025edc07-0e90-48bf-957e-8842578e17d1\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.695567 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-nb\") pod \"025edc07-0e90-48bf-957e-8842578e17d1\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.695590 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-openstack-cell1\") pod \"025edc07-0e90-48bf-957e-8842578e17d1\" (UID: \"025edc07-0e90-48bf-957e-8842578e17d1\") " Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.701496 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025edc07-0e90-48bf-957e-8842578e17d1-kube-api-access-8426m" (OuterVolumeSpecName: "kube-api-access-8426m") pod "025edc07-0e90-48bf-957e-8842578e17d1" (UID: "025edc07-0e90-48bf-957e-8842578e17d1"). InnerVolumeSpecName "kube-api-access-8426m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.751938 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-config" (OuterVolumeSpecName: "config") pod "025edc07-0e90-48bf-957e-8842578e17d1" (UID: "025edc07-0e90-48bf-957e-8842578e17d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.765616 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "025edc07-0e90-48bf-957e-8842578e17d1" (UID: "025edc07-0e90-48bf-957e-8842578e17d1"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.765981 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "025edc07-0e90-48bf-957e-8842578e17d1" (UID: "025edc07-0e90-48bf-957e-8842578e17d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.771618 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "025edc07-0e90-48bf-957e-8842578e17d1" (UID: "025edc07-0e90-48bf-957e-8842578e17d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.772802 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "025edc07-0e90-48bf-957e-8842578e17d1" (UID: "025edc07-0e90-48bf-957e-8842578e17d1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.796820 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8426m\" (UniqueName: \"kubernetes.io/projected/025edc07-0e90-48bf-957e-8842578e17d1-kube-api-access-8426m\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.796855 4988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.796866 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.796874 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.796882 4988 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:54 crc kubenswrapper[4988]: I1123 08:28:54.796891 4988 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/025edc07-0e90-48bf-957e-8842578e17d1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:55 crc kubenswrapper[4988]: I1123 08:28:55.044682 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867587c497-n25bb" event={"ID":"025edc07-0e90-48bf-957e-8842578e17d1","Type":"ContainerDied","Data":"b45717b22a4125fc2d1803dd0bba8ef56024b253a2a2f1d5f35a1b9c90e2fa6c"} Nov 23 08:28:55 crc kubenswrapper[4988]: I1123 08:28:55.044709 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-867587c497-n25bb" Nov 23 08:28:55 crc kubenswrapper[4988]: I1123 08:28:55.044747 4988 scope.go:117] "RemoveContainer" containerID="cc6156b2055bc8666595e72eed050a7bb97f14e171e51395fa7fc0698d44ed3a" Nov 23 08:28:55 crc kubenswrapper[4988]: I1123 08:28:55.047098 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" event={"ID":"1d55647b-8db7-4352-aba2-c1bc67c744e0","Type":"ContainerStarted","Data":"b866e9e2863d61f526e724f627d3a09337f968c80308978444aa8ce2bb54ecd4"} Nov 23 08:28:55 crc kubenswrapper[4988]: I1123 08:28:55.069728 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" podStartSLOduration=2.262850742 podStartE2EDuration="12.069710241s" podCreationTimestamp="2025-11-23 08:28:43 +0000 UTC" firstStartedPulling="2025-11-23 08:28:44.564311369 +0000 UTC m=+6176.872824132" lastFinishedPulling="2025-11-23 08:28:54.371170868 +0000 UTC m=+6186.679683631" observedRunningTime="2025-11-23 08:28:55.064078294 +0000 UTC m=+6187.372591067" watchObservedRunningTime="2025-11-23 08:28:55.069710241 +0000 UTC m=+6187.378223004" Nov 23 08:28:55 crc kubenswrapper[4988]: I1123 08:28:55.089709 4988 scope.go:117] "RemoveContainer" containerID="6ee1067fccbed43db6ec288fca6cc8e065f7f887bf5f31451a8189c9ee1d8651" Nov 23 08:28:55 crc kubenswrapper[4988]: I1123 08:28:55.089863 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-867587c497-n25bb"] Nov 23 08:28:55 crc kubenswrapper[4988]: I1123 08:28:55.102921 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-867587c497-n25bb"] Nov 23 08:28:56 crc kubenswrapper[4988]: I1123 08:28:56.511405 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025edc07-0e90-48bf-957e-8842578e17d1" path="/var/lib/kubelet/pods/025edc07-0e90-48bf-957e-8842578e17d1/volumes" Nov 23 08:29:01 crc kubenswrapper[4988]: I1123 08:29:01.008577 4988 scope.go:117] "RemoveContainer" containerID="91c3804eaa3ea79ba9fcb4241239a1762f5e9853007aaa498ea565ac860a3634" Nov 23 08:29:01 crc kubenswrapper[4988]: I1123 08:29:01.064423 4988 scope.go:117] "RemoveContainer" containerID="26a1fc8cf72e69eade4d0d162c378138f3dc6f9eb58e29a617ee5446a8f33839" Nov 23 08:29:02 crc kubenswrapper[4988]: I1123 08:29:02.198009 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 08:29:03 crc kubenswrapper[4988]: I1123 08:29:03.033565 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-k9qj7"] Nov 23 08:29:03 crc kubenswrapper[4988]: I1123 08:29:03.042250 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-k9qj7"] Nov 23 08:29:04 crc kubenswrapper[4988]: I1123 08:29:04.507491 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775678bf-5721-4f4b-ac7d-f4ff4982a94d" path="/var/lib/kubelet/pods/775678bf-5721-4f4b-ac7d-f4ff4982a94d/volumes" Nov 23 08:29:08 crc kubenswrapper[4988]: I1123 08:29:08.189036 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d55647b-8db7-4352-aba2-c1bc67c744e0" containerID="b866e9e2863d61f526e724f627d3a09337f968c80308978444aa8ce2bb54ecd4" exitCode=0 Nov 23 08:29:08 crc kubenswrapper[4988]: I1123 08:29:08.189133 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" event={"ID":"1d55647b-8db7-4352-aba2-c1bc67c744e0","Type":"ContainerDied","Data":"b866e9e2863d61f526e724f627d3a09337f968c80308978444aa8ce2bb54ecd4"} Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.616986 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.634382 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-inventory\") pod \"1d55647b-8db7-4352-aba2-c1bc67c744e0\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.634492 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zk5c\" (UniqueName: \"kubernetes.io/projected/1d55647b-8db7-4352-aba2-c1bc67c744e0-kube-api-access-9zk5c\") pod \"1d55647b-8db7-4352-aba2-c1bc67c744e0\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.634548 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-ssh-key\") pod \"1d55647b-8db7-4352-aba2-c1bc67c744e0\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.634594 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-pre-adoption-validation-combined-ca-bundle\") pod \"1d55647b-8db7-4352-aba2-c1bc67c744e0\" (UID: \"1d55647b-8db7-4352-aba2-c1bc67c744e0\") " Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.640214 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "1d55647b-8db7-4352-aba2-c1bc67c744e0" (UID: "1d55647b-8db7-4352-aba2-c1bc67c744e0"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.640953 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d55647b-8db7-4352-aba2-c1bc67c744e0-kube-api-access-9zk5c" (OuterVolumeSpecName: "kube-api-access-9zk5c") pod "1d55647b-8db7-4352-aba2-c1bc67c744e0" (UID: "1d55647b-8db7-4352-aba2-c1bc67c744e0"). InnerVolumeSpecName "kube-api-access-9zk5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.675478 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1d55647b-8db7-4352-aba2-c1bc67c744e0" (UID: "1d55647b-8db7-4352-aba2-c1bc67c744e0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.676120 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-inventory" (OuterVolumeSpecName: "inventory") pod "1d55647b-8db7-4352-aba2-c1bc67c744e0" (UID: "1d55647b-8db7-4352-aba2-c1bc67c744e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.736896 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.737208 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zk5c\" (UniqueName: \"kubernetes.io/projected/1d55647b-8db7-4352-aba2-c1bc67c744e0-kube-api-access-9zk5c\") on node \"crc\" DevicePath \"\"" Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.737219 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:29:09 crc kubenswrapper[4988]: I1123 08:29:09.737227 4988 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d55647b-8db7-4352-aba2-c1bc67c744e0-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:29:10 crc kubenswrapper[4988]: I1123 08:29:10.217318 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" event={"ID":"1d55647b-8db7-4352-aba2-c1bc67c744e0","Type":"ContainerDied","Data":"6bfc62de4b1f2d39090c8571fd9b860527086ca56cfb7a991d8d758c0c623616"} Nov 23 08:29:10 crc kubenswrapper[4988]: I1123 08:29:10.217379 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bfc62de4b1f2d39090c8571fd9b860527086ca56cfb7a991d8d758c0c623616" Nov 23 08:29:10 crc kubenswrapper[4988]: I1123 08:29:10.217403 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.300056 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp"] Nov 23 08:29:21 crc kubenswrapper[4988]: E1123 08:29:21.301222 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025edc07-0e90-48bf-957e-8842578e17d1" containerName="init" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.301240 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="025edc07-0e90-48bf-957e-8842578e17d1" containerName="init" Nov 23 08:29:21 crc kubenswrapper[4988]: E1123 08:29:21.301282 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025edc07-0e90-48bf-957e-8842578e17d1" containerName="dnsmasq-dns" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.301293 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="025edc07-0e90-48bf-957e-8842578e17d1" containerName="dnsmasq-dns" Nov 23 08:29:21 crc kubenswrapper[4988]: E1123 08:29:21.301341 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d55647b-8db7-4352-aba2-c1bc67c744e0" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.301354 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d55647b-8db7-4352-aba2-c1bc67c744e0" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.301616 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="025edc07-0e90-48bf-957e-8842578e17d1" containerName="dnsmasq-dns" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.301654 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d55647b-8db7-4352-aba2-c1bc67c744e0" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.302626 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.305506 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.305900 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.305937 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.307907 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.322243 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp"] Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.420903 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.421284 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srb76\" (UniqueName: \"kubernetes.io/projected/7d3fd979-5f3d-43cf-a771-80668ab96673-kube-api-access-srb76\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.421716 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.421778 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.523508 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.523580 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-inventory\") pod 
\"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.523719 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.523763 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srb76\" (UniqueName: \"kubernetes.io/projected/7d3fd979-5f3d-43cf-a771-80668ab96673-kube-api-access-srb76\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.531124 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.532467 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.546399 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.552786 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srb76\" (UniqueName: \"kubernetes.io/projected/7d3fd979-5f3d-43cf-a771-80668ab96673-kube-api-access-srb76\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:21 crc kubenswrapper[4988]: I1123 08:29:21.638801 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:29:22 crc kubenswrapper[4988]: I1123 08:29:22.234024 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp"] Nov 23 08:29:22 crc kubenswrapper[4988]: I1123 08:29:22.359042 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" event={"ID":"7d3fd979-5f3d-43cf-a771-80668ab96673","Type":"ContainerStarted","Data":"1afc1f7c79a84d56d62b96fbc1dca8d0df92c3f42f1e9d891774636c1a91de75"} Nov 23 08:29:23 crc kubenswrapper[4988]: I1123 08:29:23.373107 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" event={"ID":"7d3fd979-5f3d-43cf-a771-80668ab96673","Type":"ContainerStarted","Data":"8ca54edca6b9d93942e34490b979ab121890d93cda02899dc364d06668704483"} Nov 23 08:29:36 crc kubenswrapper[4988]: I1123 08:29:36.048360 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" podStartSLOduration=14.577370537 podStartE2EDuration="15.048339138s" podCreationTimestamp="2025-11-23 08:29:21 +0000 UTC" firstStartedPulling="2025-11-23 08:29:22.24863116 +0000 UTC m=+6214.557143923" lastFinishedPulling="2025-11-23 08:29:22.719599761 +0000 UTC m=+6215.028112524" observedRunningTime="2025-11-23 08:29:23.405313952 +0000 UTC m=+6215.713826725" watchObservedRunningTime="2025-11-23 08:29:36.048339138 +0000 UTC m=+6228.356851901" Nov 23 08:29:36 crc kubenswrapper[4988]: I1123 08:29:36.051054 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6vdlx"] Nov 23 08:29:36 crc kubenswrapper[4988]: I1123 08:29:36.062476 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a6bc-account-create-vqqr8"] Nov 23 08:29:36 crc kubenswrapper[4988]: I1123 08:29:36.070824 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6vdlx"] Nov 23 08:29:36 crc kubenswrapper[4988]: I1123 08:29:36.078360 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a6bc-account-create-vqqr8"] Nov 23 08:29:36 crc kubenswrapper[4988]: I1123 08:29:36.509413 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b61d575-20f6-4426-853e-261215c93765" path="/var/lib/kubelet/pods/1b61d575-20f6-4426-853e-261215c93765/volumes" Nov 23 08:29:36 crc kubenswrapper[4988]: I1123 08:29:36.510546 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949149d5-0dd5-4658-b0c5-44bca1bbe862" path="/var/lib/kubelet/pods/949149d5-0dd5-4658-b0c5-44bca1bbe862/volumes" Nov 23 08:29:47 crc kubenswrapper[4988]: I1123 08:29:47.051185 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rdkz6"] Nov 23 08:29:47 crc kubenswrapper[4988]: I1123 08:29:47.064810 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-rdkz6"] Nov 23 08:29:48 crc kubenswrapper[4988]: I1123 08:29:48.519026 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180e5406-f841-4339-85e9-115029643be8" path="/var/lib/kubelet/pods/180e5406-f841-4339-85e9-115029643be8/volumes" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.165639 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c"] Nov 23 08:30:00 
crc kubenswrapper[4988]: I1123 08:30:00.169490 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.172369 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.173031 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.191711 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c"] Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.291981 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs84j\" (UniqueName: \"kubernetes.io/projected/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-kube-api-access-cs84j\") pod \"collect-profiles-29398110-8cw8c\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.292046 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-config-volume\") pod \"collect-profiles-29398110-8cw8c\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.292123 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-secret-volume\") pod \"collect-profiles-29398110-8cw8c\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.393692 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs84j\" (UniqueName: \"kubernetes.io/projected/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-kube-api-access-cs84j\") pod \"collect-profiles-29398110-8cw8c\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.393970 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-config-volume\") pod \"collect-profiles-29398110-8cw8c\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.394018 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-secret-volume\") pod \"collect-profiles-29398110-8cw8c\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.394727 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-config-volume\") pod \"collect-profiles-29398110-8cw8c\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.404743 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-secret-volume\") pod \"collect-profiles-29398110-8cw8c\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.421426 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs84j\" (UniqueName: \"kubernetes.io/projected/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-kube-api-access-cs84j\") pod \"collect-profiles-29398110-8cw8c\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:00 crc kubenswrapper[4988]: I1123 08:30:00.497604 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.002269 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c"] Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.237234 4988 scope.go:117] "RemoveContainer" containerID="77453be31c1407da1ffb642dcd592eb9cc98896d494ea6eeb2a31cfe3b54e252" Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.261056 4988 scope.go:117] "RemoveContainer" containerID="902f7c60bc9e4554a976ff61fb1680e1b5fab153402f3156b5acb4f9cdb6f9d9" Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.279803 4988 scope.go:117] "RemoveContainer" containerID="e15b0f60fbc0d1914471bc0ce072e8d960e7a7d854d82ffff1f888a1595b5d95" Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.312112 4988 scope.go:117] "RemoveContainer" containerID="f624f50fe15e5293fdfd37a83767d6b97602e59e4b8515727fd5b417be45ac93" Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.334812 4988 scope.go:117] "RemoveContainer" containerID="2d15b4cf05610d2b6be1f64a054d2ae820784520e82af142b28122c45c28c00a" Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.368235 4988 scope.go:117] "RemoveContainer" containerID="cee4eeec4e17d6158e4acd3052ef7932456dc388c11ab92de0d2b67d2e8a271f" Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.821272 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" event={"ID":"5d2f5e95-acd4-4aff-9ce0-28b5063654fb","Type":"ContainerStarted","Data":"c310cc7b47f3c608230ac453ffa2dcd903f93d1de8bb8fdf139cd804f86df29a"} Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.821312 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" event={"ID":"5d2f5e95-acd4-4aff-9ce0-28b5063654fb","Type":"ContainerStarted","Data":"22bfb45fdc176292d9b75927f2abec6c631d11793eda41eba21fc4c997a12164"} Nov 23 08:30:01 crc kubenswrapper[4988]: I1123 08:30:01.842295 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" podStartSLOduration=1.842276714 podStartE2EDuration="1.842276714s" 
podCreationTimestamp="2025-11-23 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:30:01.841105105 +0000 UTC m=+6254.149617888" watchObservedRunningTime="2025-11-23 08:30:01.842276714 +0000 UTC m=+6254.150789477" Nov 23 08:30:02 crc kubenswrapper[4988]: I1123 08:30:02.835126 4988 generic.go:334] "Generic (PLEG): container finished" podID="5d2f5e95-acd4-4aff-9ce0-28b5063654fb" containerID="c310cc7b47f3c608230ac453ffa2dcd903f93d1de8bb8fdf139cd804f86df29a" exitCode=0 Nov 23 08:30:02 crc kubenswrapper[4988]: I1123 08:30:02.835247 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" event={"ID":"5d2f5e95-acd4-4aff-9ce0-28b5063654fb","Type":"ContainerDied","Data":"c310cc7b47f3c608230ac453ffa2dcd903f93d1de8bb8fdf139cd804f86df29a"} Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.229209 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.278937 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-secret-volume\") pod \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.279039 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-config-volume\") pod \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.280034 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-config-volume" (OuterVolumeSpecName: "config-volume") pod "5d2f5e95-acd4-4aff-9ce0-28b5063654fb" (UID: "5d2f5e95-acd4-4aff-9ce0-28b5063654fb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.280128 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs84j\" (UniqueName: \"kubernetes.io/projected/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-kube-api-access-cs84j\") pod \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\" (UID: \"5d2f5e95-acd4-4aff-9ce0-28b5063654fb\") " Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.281260 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.284524 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5d2f5e95-acd4-4aff-9ce0-28b5063654fb" (UID: "5d2f5e95-acd4-4aff-9ce0-28b5063654fb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.285152 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-kube-api-access-cs84j" (OuterVolumeSpecName: "kube-api-access-cs84j") pod "5d2f5e95-acd4-4aff-9ce0-28b5063654fb" (UID: "5d2f5e95-acd4-4aff-9ce0-28b5063654fb"). InnerVolumeSpecName "kube-api-access-cs84j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.384943 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs84j\" (UniqueName: \"kubernetes.io/projected/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-kube-api-access-cs84j\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.385294 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5d2f5e95-acd4-4aff-9ce0-28b5063654fb-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.861070 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" event={"ID":"5d2f5e95-acd4-4aff-9ce0-28b5063654fb","Type":"ContainerDied","Data":"22bfb45fdc176292d9b75927f2abec6c631d11793eda41eba21fc4c997a12164"} Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.861412 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22bfb45fdc176292d9b75927f2abec6c631d11793eda41eba21fc4c997a12164" Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.861267 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c" Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.914725 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd"] Nov 23 08:30:04 crc kubenswrapper[4988]: I1123 08:30:04.924513 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-sljhd"] Nov 23 08:30:06 crc kubenswrapper[4988]: I1123 08:30:06.514995 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa3a745-9507-4b82-80cf-1f42a0c39e84" path="/var/lib/kubelet/pods/0fa3a745-9507-4b82-80cf-1f42a0c39e84/volumes" Nov 23 08:30:44 crc kubenswrapper[4988]: I1123 08:30:44.081380 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-65xgq"] Nov 23 08:30:44 crc kubenswrapper[4988]: I1123 08:30:44.090935 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-65xgq"] Nov 23 08:30:44 crc kubenswrapper[4988]: I1123 08:30:44.515357 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883a08ee-d762-4066-86b2-e7274fe3e003" path="/var/lib/kubelet/pods/883a08ee-d762-4066-86b2-e7274fe3e003/volumes" Nov 23 08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.039449 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-drb5f"] Nov 23 08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.053373 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dhk9r"] Nov 23 08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.072562 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0198-account-create-bh7n5"] Nov 23 
08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.083282 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-drb5f"] Nov 23 08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.096409 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dhk9r"] Nov 23 08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.107558 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6104-account-create-rfblv"] Nov 23 08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.118629 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7ef9-account-create-wkglm"] Nov 23 08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.129840 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6104-account-create-rfblv"] Nov 23 08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.141397 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0198-account-create-bh7n5"] Nov 23 08:30:45 crc kubenswrapper[4988]: I1123 08:30:45.157495 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7ef9-account-create-wkglm"] Nov 23 08:30:46 crc kubenswrapper[4988]: I1123 08:30:46.518235 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1000af15-4604-46b4-a1a4-fc06decac5b7" path="/var/lib/kubelet/pods/1000af15-4604-46b4-a1a4-fc06decac5b7/volumes" Nov 23 08:30:46 crc kubenswrapper[4988]: I1123 08:30:46.519996 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678fca6f-8d7a-41d6-b3e5-7bc209192c12" path="/var/lib/kubelet/pods/678fca6f-8d7a-41d6-b3e5-7bc209192c12/volumes" Nov 23 08:30:46 crc kubenswrapper[4988]: I1123 08:30:46.521154 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f22cd7-cd91-4eb0-8ea6-06e862540405" path="/var/lib/kubelet/pods/93f22cd7-cd91-4eb0-8ea6-06e862540405/volumes" Nov 23 08:30:46 crc kubenswrapper[4988]: I1123 08:30:46.522445 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="befb0fb0-05b2-4ca5-9613-658701028051" path="/var/lib/kubelet/pods/befb0fb0-05b2-4ca5-9613-658701028051/volumes" Nov 23 08:30:46 crc kubenswrapper[4988]: I1123 08:30:46.525287 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd53de0e-1cb3-4d38-a347-2865a4d88cd8" path="/var/lib/kubelet/pods/cd53de0e-1cb3-4d38-a347-2865a4d88cd8/volumes" Nov 23 08:30:51 crc kubenswrapper[4988]: I1123 08:30:51.672795 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:30:51 crc kubenswrapper[4988]: I1123 08:30:51.673544 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:31:01 crc kubenswrapper[4988]: I1123 08:31:01.653607 4988 scope.go:117] "RemoveContainer" containerID="a673339287ab2ce33ccd4afc760d1d3804d3794a111e5b5dc0c16ef28c93bdb7" Nov 23 08:31:01 crc kubenswrapper[4988]: I1123 08:31:01.691365 4988 scope.go:117] "RemoveContainer" containerID="01517096719b60a4207e953ca66513dbf855af601a377d742b1c0fcdefb72d6d" Nov 23 08:31:01 crc 
kubenswrapper[4988]: I1123 08:31:01.776257 4988 scope.go:117] "RemoveContainer" containerID="7fd84f13d44381eb72ac203eb3a37f48bcfd2e36e4d03bca00e1b1dcb55b2606" Nov 23 08:31:01 crc kubenswrapper[4988]: I1123 08:31:01.818174 4988 scope.go:117] "RemoveContainer" containerID="32f8bc967e3acab5a015027aef815378d7e558241c30a5d86dc3d589b7d89562" Nov 23 08:31:01 crc kubenswrapper[4988]: I1123 08:31:01.862672 4988 scope.go:117] "RemoveContainer" containerID="efb6a246d5dd027c159803f0f4421eaa8e358936b80f150d44e9f5887fd175b8" Nov 23 08:31:01 crc kubenswrapper[4988]: I1123 08:31:01.912946 4988 scope.go:117] "RemoveContainer" containerID="b04a6f90c0d7b7cea63a84db825f4d9b0e100008469dc9b6df182bddfef1d4a2" Nov 23 08:31:01 crc kubenswrapper[4988]: I1123 08:31:01.972286 4988 scope.go:117] "RemoveContainer" containerID="de13ced4d5a1ad77a551d27ce5cdfa3c5981e7903714790bd188451546d3b5d5" Nov 23 08:31:05 crc kubenswrapper[4988]: I1123 08:31:05.040626 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t6mmp"] Nov 23 08:31:05 crc kubenswrapper[4988]: I1123 08:31:05.053484 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t6mmp"] Nov 23 08:31:06 crc kubenswrapper[4988]: I1123 08:31:06.515380 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03cebd48-64f9-4237-a919-90259adcd22f" path="/var/lib/kubelet/pods/03cebd48-64f9-4237-a919-90259adcd22f/volumes" Nov 23 08:31:21 crc kubenswrapper[4988]: I1123 08:31:21.672979 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:31:21 crc kubenswrapper[4988]: I1123 08:31:21.673667 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:31:23 crc kubenswrapper[4988]: I1123 08:31:23.056453 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7tv69"] Nov 23 08:31:23 crc kubenswrapper[4988]: I1123 08:31:23.067900 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7tv69"] Nov 23 08:31:24 crc kubenswrapper[4988]: I1123 08:31:24.520763 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f641da98-25f1-4abc-80bc-0374d3e68666" path="/var/lib/kubelet/pods/f641da98-25f1-4abc-80bc-0374d3e68666/volumes" Nov 23 08:31:25 crc kubenswrapper[4988]: I1123 08:31:25.030641 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zj4m"] Nov 23 08:31:25 crc kubenswrapper[4988]: I1123 08:31:25.038167 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4zj4m"] Nov 23 08:31:26 crc kubenswrapper[4988]: I1123 08:31:26.533678 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d4d596f-639f-47c4-8785-6c96ce284f50" path="/var/lib/kubelet/pods/8d4d596f-639f-47c4-8785-6c96ce284f50/volumes" Nov 23 08:31:51 crc kubenswrapper[4988]: I1123 08:31:51.672167 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:31:51 crc kubenswrapper[4988]: I1123 08:31:51.672872 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:31:51 crc kubenswrapper[4988]: I1123 08:31:51.672924 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 08:31:51 crc kubenswrapper[4988]: I1123 08:31:51.673847 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9ca61ab9057b2b0d27b5433a43020e51548a4e9ce3df4a2a0d90fd878028b23"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:31:51 crc kubenswrapper[4988]: I1123 08:31:51.673916 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://e9ca61ab9057b2b0d27b5433a43020e51548a4e9ce3df4a2a0d90fd878028b23" gracePeriod=600 Nov 23 08:31:52 crc kubenswrapper[4988]: I1123 08:31:52.116993 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="e9ca61ab9057b2b0d27b5433a43020e51548a4e9ce3df4a2a0d90fd878028b23" exitCode=0 Nov 23 08:31:52 crc kubenswrapper[4988]: I1123 08:31:52.117063 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"e9ca61ab9057b2b0d27b5433a43020e51548a4e9ce3df4a2a0d90fd878028b23"} Nov 23 08:31:52 crc kubenswrapper[4988]: I1123 08:31:52.117394 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f"} Nov 23 08:31:52 crc kubenswrapper[4988]: I1123 08:31:52.117420 4988 scope.go:117] "RemoveContainer" containerID="20cbdd4bec6fe7ccda65a74a0fd0765806682c2141b3cb864bddcdebfa166f84" Nov 23 08:32:02 crc kubenswrapper[4988]: I1123 08:32:02.124873 4988 scope.go:117] "RemoveContainer" containerID="c1a48b389aba45bf08bff938247489970752da06ddcbed603b4f4a9b7e936118" Nov 23 08:32:02 crc kubenswrapper[4988]: I1123 08:32:02.179408 4988 scope.go:117] "RemoveContainer" containerID="343b1ea4d5448ca08ba9bbc4295c801b79aa4602a63354b910695fa52618cd20" Nov 23 08:32:02 crc kubenswrapper[4988]: I1123 08:32:02.214976 4988 scope.go:117] "RemoveContainer" containerID="1361d33e575f9506111a31effd65ee2cbd71e6fc796ba6b580b67b61c04f3ff7" Nov 23 08:32:02 crc kubenswrapper[4988]: I1123 08:32:02.351458 4988 scope.go:117] "RemoveContainer" containerID="3cac82a215c408489b0655ecdc15ca7f35c2f255d9afe789c31f12ab112c07fa" Nov 23 08:32:02 crc kubenswrapper[4988]: I1123 08:32:02.397814 4988 scope.go:117] "RemoveContainer" 
containerID="af5b22c0d7f5479f42788a85e39297629e7de82fc3830c6e3a4b00b383c8a021" Nov 23 08:32:12 crc kubenswrapper[4988]: I1123 08:32:12.051502 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-zrmnq"] Nov 23 08:32:12 crc kubenswrapper[4988]: I1123 08:32:12.063557 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-zrmnq"] Nov 23 08:32:12 crc kubenswrapper[4988]: I1123 08:32:12.510986 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b3ee4a-8bc5-4cf2-a345-c47267451d60" path="/var/lib/kubelet/pods/82b3ee4a-8bc5-4cf2-a345-c47267451d60/volumes" Nov 23 08:33:02 crc kubenswrapper[4988]: I1123 08:33:02.567881 4988 scope.go:117] "RemoveContainer" containerID="e125af4531b7c5e66435f3cac544e62e8a97e4f2d72067750ef3fdab19958b22" Nov 23 08:34:21 crc kubenswrapper[4988]: I1123 08:34:21.672812 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:34:21 crc kubenswrapper[4988]: I1123 08:34:21.673492 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:34:50 crc kubenswrapper[4988]: I1123 08:34:50.071031 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-xhcvh"] Nov 23 08:34:50 crc kubenswrapper[4988]: I1123 08:34:50.082168 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-e035-account-create-rfmkd"] Nov 23 08:34:50 crc kubenswrapper[4988]: I1123 08:34:50.093827 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-xhcvh"] Nov 23 08:34:50 crc kubenswrapper[4988]: I1123 08:34:50.104310 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-e035-account-create-rfmkd"] Nov 23 08:34:50 crc kubenswrapper[4988]: I1123 08:34:50.517438 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864ed855-8f69-4030-bf86-653c71905588" path="/var/lib/kubelet/pods/864ed855-8f69-4030-bf86-653c71905588/volumes" Nov 23 08:34:50 crc kubenswrapper[4988]: I1123 08:34:50.519154 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f42972-e4b0-4647-ad19-b0a55d64ba09" path="/var/lib/kubelet/pods/91f42972-e4b0-4647-ad19-b0a55d64ba09/volumes" Nov 23 08:34:51 crc kubenswrapper[4988]: I1123 08:34:51.672746 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:34:51 crc kubenswrapper[4988]: I1123 08:34:51.673266 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.686589 4988 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8ssrc"] Nov 23 08:34:55 crc kubenswrapper[4988]: E1123 08:34:55.687564 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d2f5e95-acd4-4aff-9ce0-28b5063654fb" containerName="collect-profiles" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.687608 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d2f5e95-acd4-4aff-9ce0-28b5063654fb" containerName="collect-profiles" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.687993 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d2f5e95-acd4-4aff-9ce0-28b5063654fb" containerName="collect-profiles" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.690572 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.707432 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ssrc"] Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.749528 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-catalog-content\") pod \"certified-operators-8ssrc\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.749869 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-utilities\") pod \"certified-operators-8ssrc\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.750025 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52fj\" (UniqueName: \"kubernetes.io/projected/349f1dad-e940-4b1d-834d-054dd5c41bcd-kube-api-access-f52fj\") pod \"certified-operators-8ssrc\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.851864 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-utilities\") pod \"certified-operators-8ssrc\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.851979 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52fj\" (UniqueName: \"kubernetes.io/projected/349f1dad-e940-4b1d-834d-054dd5c41bcd-kube-api-access-f52fj\") pod \"certified-operators-8ssrc\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.852065 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-catalog-content\") pod \"certified-operators-8ssrc\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:55 crc kubenswrapper[4988]: 
I1123 08:34:55.852418 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-utilities\") pod \"certified-operators-8ssrc\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.852533 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-catalog-content\") pod \"certified-operators-8ssrc\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:55 crc kubenswrapper[4988]: I1123 08:34:55.872456 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f52fj\" (UniqueName: \"kubernetes.io/projected/349f1dad-e940-4b1d-834d-054dd5c41bcd-kube-api-access-f52fj\") pod \"certified-operators-8ssrc\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:56 crc kubenswrapper[4988]: I1123 08:34:56.020263 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:34:56 crc kubenswrapper[4988]: I1123 08:34:56.568740 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ssrc"] Nov 23 08:34:57 crc kubenswrapper[4988]: I1123 08:34:57.405508 4988 generic.go:334] "Generic (PLEG): container finished" podID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerID="5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828" exitCode=0 Nov 23 08:34:57 crc kubenswrapper[4988]: I1123 08:34:57.405594 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ssrc" event={"ID":"349f1dad-e940-4b1d-834d-054dd5c41bcd","Type":"ContainerDied","Data":"5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828"} Nov 23 08:34:57 crc kubenswrapper[4988]: I1123 08:34:57.405848 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ssrc" event={"ID":"349f1dad-e940-4b1d-834d-054dd5c41bcd","Type":"ContainerStarted","Data":"3d6624f7f72844f3f9ead35d68f8cc365d57b62de7e37b3721b26f08756fae8c"} Nov 23 08:34:57 crc kubenswrapper[4988]: I1123 08:34:57.409181 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:34:58 crc kubenswrapper[4988]: I1123 08:34:58.420182 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ssrc" event={"ID":"349f1dad-e940-4b1d-834d-054dd5c41bcd","Type":"ContainerStarted","Data":"4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e"} Nov 23 08:35:00 crc kubenswrapper[4988]: I1123 08:35:00.444691 4988 generic.go:334] "Generic (PLEG): container finished" podID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerID="4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e" exitCode=0 Nov 23 08:35:00 crc kubenswrapper[4988]: I1123 08:35:00.445163 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ssrc" event={"ID":"349f1dad-e940-4b1d-834d-054dd5c41bcd","Type":"ContainerDied","Data":"4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e"} Nov 23 08:35:01 crc kubenswrapper[4988]: I1123 08:35:01.459442 4988 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ssrc" event={"ID":"349f1dad-e940-4b1d-834d-054dd5c41bcd","Type":"ContainerStarted","Data":"aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791"} Nov 23 08:35:01 crc kubenswrapper[4988]: I1123 08:35:01.503552 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8ssrc" podStartSLOduration=2.987858601 podStartE2EDuration="6.503523325s" podCreationTimestamp="2025-11-23 08:34:55 +0000 UTC" firstStartedPulling="2025-11-23 08:34:57.408894429 +0000 UTC m=+6549.717407192" lastFinishedPulling="2025-11-23 08:35:00.924559143 +0000 UTC m=+6553.233071916" observedRunningTime="2025-11-23 08:35:01.48545097 +0000 UTC m=+6553.793963763" watchObservedRunningTime="2025-11-23 08:35:01.503523325 +0000 UTC m=+6553.812036128" Nov 23 08:35:02 crc kubenswrapper[4988]: I1123 08:35:02.734116 4988 scope.go:117] "RemoveContainer" containerID="b5050ceca04f5f7fef65fa9b96b9271d27de1e7c5dbd13f76d0bb41f87006f06" Nov 23 08:35:02 crc kubenswrapper[4988]: I1123 08:35:02.789321 4988 scope.go:117] "RemoveContainer" containerID="7a53d4b9795e4ee665ab772395df9292ec2b8b9d69e0a021bf847dad50ecd747" Nov 23 08:35:04 crc kubenswrapper[4988]: I1123 08:35:04.047383 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-qqk7n"] Nov 23 08:35:04 crc kubenswrapper[4988]: I1123 08:35:04.073786 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-qqk7n"] Nov 23 08:35:04 crc kubenswrapper[4988]: I1123 08:35:04.530858 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939696e4-9c22-448f-8269-8d57c545640e" path="/var/lib/kubelet/pods/939696e4-9c22-448f-8269-8d57c545640e/volumes" Nov 23 08:35:06 crc kubenswrapper[4988]: I1123 08:35:06.020442 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:35:06 crc kubenswrapper[4988]: I1123 08:35:06.020487 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:35:06 crc kubenswrapper[4988]: I1123 08:35:06.077811 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:35:06 crc kubenswrapper[4988]: I1123 08:35:06.572816 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:35:06 crc kubenswrapper[4988]: I1123 08:35:06.629475 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8ssrc"] Nov 23 08:35:08 crc kubenswrapper[4988]: I1123 08:35:08.527338 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8ssrc" podUID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerName="registry-server" containerID="cri-o://aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791" gracePeriod=2 Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.091927 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.204318 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f52fj\" (UniqueName: \"kubernetes.io/projected/349f1dad-e940-4b1d-834d-054dd5c41bcd-kube-api-access-f52fj\") pod \"349f1dad-e940-4b1d-834d-054dd5c41bcd\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.204590 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-utilities\") pod \"349f1dad-e940-4b1d-834d-054dd5c41bcd\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.204855 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-catalog-content\") pod \"349f1dad-e940-4b1d-834d-054dd5c41bcd\" (UID: \"349f1dad-e940-4b1d-834d-054dd5c41bcd\") " Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.205215 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-utilities" (OuterVolumeSpecName: "utilities") pod "349f1dad-e940-4b1d-834d-054dd5c41bcd" (UID: "349f1dad-e940-4b1d-834d-054dd5c41bcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.205442 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.210422 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349f1dad-e940-4b1d-834d-054dd5c41bcd-kube-api-access-f52fj" (OuterVolumeSpecName: "kube-api-access-f52fj") pod "349f1dad-e940-4b1d-834d-054dd5c41bcd" (UID: "349f1dad-e940-4b1d-834d-054dd5c41bcd"). InnerVolumeSpecName "kube-api-access-f52fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.248278 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "349f1dad-e940-4b1d-834d-054dd5c41bcd" (UID: "349f1dad-e940-4b1d-834d-054dd5c41bcd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.307286 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/349f1dad-e940-4b1d-834d-054dd5c41bcd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.307324 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f52fj\" (UniqueName: \"kubernetes.io/projected/349f1dad-e940-4b1d-834d-054dd5c41bcd-kube-api-access-f52fj\") on node \"crc\" DevicePath \"\"" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.540276 4988 generic.go:334] "Generic (PLEG): container finished" podID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerID="aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791" exitCode=0 Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.540348 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ssrc" event={"ID":"349f1dad-e940-4b1d-834d-054dd5c41bcd","Type":"ContainerDied","Data":"aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791"} Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.540401 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ssrc" event={"ID":"349f1dad-e940-4b1d-834d-054dd5c41bcd","Type":"ContainerDied","Data":"3d6624f7f72844f3f9ead35d68f8cc365d57b62de7e37b3721b26f08756fae8c"} Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.540444 4988 scope.go:117] "RemoveContainer" containerID="aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.540459 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8ssrc" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.562682 4988 scope.go:117] "RemoveContainer" containerID="4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.597241 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8ssrc"] Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.598511 4988 scope.go:117] "RemoveContainer" containerID="5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.615723 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8ssrc"] Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.664055 4988 scope.go:117] "RemoveContainer" containerID="aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791" Nov 23 08:35:09 crc kubenswrapper[4988]: E1123 08:35:09.665855 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791\": container with ID starting with aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791 not found: ID does not exist" containerID="aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.665887 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791"} err="failed to get container status \"aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791\": rpc error: code = NotFound desc = could not find container \"aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791\": container with ID starting with aaf0bce9caa97e9eae111f9a3c38699584913629a5f8b7aaa92fcdb1ffe14791 not found: ID does not exist" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.665911 4988 scope.go:117] "RemoveContainer" containerID="4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e" Nov 23 08:35:09 crc kubenswrapper[4988]: E1123 08:35:09.666593 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e\": container with ID starting with 4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e not found: ID does not exist" containerID="4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.666624 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e"} err="failed to get container status \"4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e\": rpc error: code = NotFound desc = could not find container \"4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e\": container with ID starting with 4b96d07f9f2587860ae3728db9047bb15fdcb83826b4dc644679e9351d84403e not found: ID does not exist" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.666766 4988 scope.go:117] "RemoveContainer" containerID="5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828" Nov 23 08:35:09 crc kubenswrapper[4988]: E1123 08:35:09.667167 4988 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828\": container with ID starting with 5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828 not found: ID does not exist" containerID="5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828" Nov 23 08:35:09 crc kubenswrapper[4988]: I1123 08:35:09.667263 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828"} err="failed to get container status \"5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828\": rpc error: code = NotFound desc = could not find container \"5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828\": container with ID starting with 5117b772b236de4767f090b744d6f965684b47975b9bb0da34b02e3241b45828 not found: ID does not exist" Nov 23 08:35:10 crc kubenswrapper[4988]: I1123 08:35:10.510604 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349f1dad-e940-4b1d-834d-054dd5c41bcd" path="/var/lib/kubelet/pods/349f1dad-e940-4b1d-834d-054dd5c41bcd/volumes" Nov 23 08:35:21 crc kubenswrapper[4988]: I1123 08:35:21.672090 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:35:21 crc kubenswrapper[4988]: I1123 08:35:21.672915 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:35:21 crc kubenswrapper[4988]: I1123 08:35:21.672997 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 08:35:21 crc kubenswrapper[4988]: I1123 08:35:21.674358 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:35:21 crc kubenswrapper[4988]: I1123 08:35:21.674521 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" gracePeriod=600 Nov 23 08:35:21 crc kubenswrapper[4988]: E1123 08:35:21.809307 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:35:22 crc kubenswrapper[4988]: I1123 08:35:22.697169 4988 generic.go:334] 
"Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" exitCode=0 Nov 23 08:35:22 crc kubenswrapper[4988]: I1123 08:35:22.697934 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f"} Nov 23 08:35:22 crc kubenswrapper[4988]: I1123 08:35:22.698074 4988 scope.go:117] "RemoveContainer" containerID="e9ca61ab9057b2b0d27b5433a43020e51548a4e9ce3df4a2a0d90fd878028b23" Nov 23 08:35:22 crc kubenswrapper[4988]: I1123 08:35:22.702212 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:35:22 crc kubenswrapper[4988]: E1123 08:35:22.702601 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:35:33 crc kubenswrapper[4988]: I1123 08:35:33.496211 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:35:33 crc kubenswrapper[4988]: E1123 08:35:33.497164 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:35:45 crc kubenswrapper[4988]: I1123 08:35:45.496250 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:35:45 crc kubenswrapper[4988]: E1123 08:35:45.497045 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.399227 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-96zxc"] Nov 23 08:35:59 crc kubenswrapper[4988]: E1123 08:35:59.401697 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerName="registry-server" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.401794 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerName="registry-server" Nov 23 08:35:59 crc kubenswrapper[4988]: E1123 08:35:59.401869 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerName="extract-content" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.401925 4988 
state_mem.go:107] "Deleted CPUSet assignment" podUID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerName="extract-content" Nov 23 08:35:59 crc kubenswrapper[4988]: E1123 08:35:59.402012 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerName="extract-utilities" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.402081 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerName="extract-utilities" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.402413 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="349f1dad-e940-4b1d-834d-054dd5c41bcd" containerName="registry-server" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.406319 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.425789 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96zxc"] Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.489441 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-utilities\") pod \"redhat-operators-96zxc\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.489501 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-catalog-content\") pod \"redhat-operators-96zxc\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.489528 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt2cp\" (UniqueName: \"kubernetes.io/projected/7fd4d09f-742e-4867-b801-1604620d3365-kube-api-access-kt2cp\") pod \"redhat-operators-96zxc\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.499362 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:35:59 crc kubenswrapper[4988]: E1123 08:35:59.499688 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.591241 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-utilities\") pod \"redhat-operators-96zxc\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.591306 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-catalog-content\") pod \"redhat-operators-96zxc\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.591335 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt2cp\" (UniqueName: \"kubernetes.io/projected/7fd4d09f-742e-4867-b801-1604620d3365-kube-api-access-kt2cp\") pod \"redhat-operators-96zxc\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.592326 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-utilities\") pod \"redhat-operators-96zxc\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.592595 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-catalog-content\") pod \"redhat-operators-96zxc\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.616673 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt2cp\" (UniqueName: \"kubernetes.io/projected/7fd4d09f-742e-4867-b801-1604620d3365-kube-api-access-kt2cp\") pod \"redhat-operators-96zxc\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:35:59 crc kubenswrapper[4988]: I1123 08:35:59.737667 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:36:00 crc kubenswrapper[4988]: I1123 08:36:00.287742 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96zxc"] Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.120352 4988 generic.go:334] "Generic (PLEG): container finished" podID="7fd4d09f-742e-4867-b801-1604620d3365" containerID="1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805" exitCode=0 Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.120453 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96zxc" event={"ID":"7fd4d09f-742e-4867-b801-1604620d3365","Type":"ContainerDied","Data":"1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805"} Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.120686 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96zxc" event={"ID":"7fd4d09f-742e-4867-b801-1604620d3365","Type":"ContainerStarted","Data":"79abf02115ecef82d3c0b0d437c0df239db0f6bc1ca71dd707bdeb968541fcb7"} Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.792404 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wnbn9"] Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.797319 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.836673 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-utilities\") pod \"redhat-marketplace-wnbn9\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.836747 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9lnm\" (UniqueName: \"kubernetes.io/projected/998ff700-a63b-4178-b456-57ad8e5f36e5-kube-api-access-s9lnm\") pod \"redhat-marketplace-wnbn9\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.836817 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-catalog-content\") pod \"redhat-marketplace-wnbn9\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.853770 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnbn9"] Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.939323 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-catalog-content\") pod \"redhat-marketplace-wnbn9\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.939491 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-utilities\") pod \"redhat-marketplace-wnbn9\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.939520 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9lnm\" (UniqueName: \"kubernetes.io/projected/998ff700-a63b-4178-b456-57ad8e5f36e5-kube-api-access-s9lnm\") pod \"redhat-marketplace-wnbn9\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.940056 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-catalog-content\") pod \"redhat-marketplace-wnbn9\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.940071 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-utilities\") pod \"redhat-marketplace-wnbn9\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:01 crc kubenswrapper[4988]: I1123 08:36:01.974268 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s9lnm\" (UniqueName: \"kubernetes.io/projected/998ff700-a63b-4178-b456-57ad8e5f36e5-kube-api-access-s9lnm\") pod \"redhat-marketplace-wnbn9\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:02 crc kubenswrapper[4988]: I1123 08:36:02.135421 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96zxc" event={"ID":"7fd4d09f-742e-4867-b801-1604620d3365","Type":"ContainerStarted","Data":"b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b"} Nov 23 08:36:02 crc kubenswrapper[4988]: I1123 08:36:02.139654 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:02 crc kubenswrapper[4988]: I1123 08:36:02.655254 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnbn9"] Nov 23 08:36:02 crc kubenswrapper[4988]: W1123 08:36:02.661527 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod998ff700_a63b_4178_b456_57ad8e5f36e5.slice/crio-ba9560f8949b12d97c2d156c672d4897dc9f7ad2a49444dfbe109ffe7619628f WatchSource:0}: Error finding container ba9560f8949b12d97c2d156c672d4897dc9f7ad2a49444dfbe109ffe7619628f: Status 404 returned error can't find the container with id ba9560f8949b12d97c2d156c672d4897dc9f7ad2a49444dfbe109ffe7619628f Nov 23 08:36:02 crc kubenswrapper[4988]: I1123 08:36:02.940724 4988 scope.go:117] "RemoveContainer" containerID="354d2c5c293b36ce7eb64e72f7c03d657f326243bb57ea86ad54c53973b64c7b" Nov 23 08:36:03 crc kubenswrapper[4988]: I1123 08:36:03.146150 4988 generic.go:334] "Generic (PLEG): container finished" podID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerID="12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148" exitCode=0 Nov 23 08:36:03 crc kubenswrapper[4988]: I1123 08:36:03.146412 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnbn9" event={"ID":"998ff700-a63b-4178-b456-57ad8e5f36e5","Type":"ContainerDied","Data":"12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148"} Nov 23 08:36:03 crc kubenswrapper[4988]: I1123 08:36:03.146456 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnbn9" event={"ID":"998ff700-a63b-4178-b456-57ad8e5f36e5","Type":"ContainerStarted","Data":"ba9560f8949b12d97c2d156c672d4897dc9f7ad2a49444dfbe109ffe7619628f"} Nov 23 08:36:05 crc kubenswrapper[4988]: I1123 08:36:05.165104 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnbn9" event={"ID":"998ff700-a63b-4178-b456-57ad8e5f36e5","Type":"ContainerStarted","Data":"5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30"} Nov 23 08:36:06 crc kubenswrapper[4988]: I1123 08:36:06.180941 4988 generic.go:334] "Generic (PLEG): container finished" podID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerID="5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30" exitCode=0 Nov 23 08:36:06 crc kubenswrapper[4988]: I1123 08:36:06.181009 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnbn9" event={"ID":"998ff700-a63b-4178-b456-57ad8e5f36e5","Type":"ContainerDied","Data":"5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30"} Nov 23 08:36:07 crc kubenswrapper[4988]: I1123 08:36:07.194553 4988 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnbn9" event={"ID":"998ff700-a63b-4178-b456-57ad8e5f36e5","Type":"ContainerStarted","Data":"c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7"} Nov 23 08:36:07 crc kubenswrapper[4988]: I1123 08:36:07.226237 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wnbn9" podStartSLOduration=2.697754188 podStartE2EDuration="6.226218457s" podCreationTimestamp="2025-11-23 08:36:01 +0000 UTC" firstStartedPulling="2025-11-23 08:36:03.147684438 +0000 UTC m=+6615.456197201" lastFinishedPulling="2025-11-23 08:36:06.676148707 +0000 UTC m=+6618.984661470" observedRunningTime="2025-11-23 08:36:07.221410798 +0000 UTC m=+6619.529923581" watchObservedRunningTime="2025-11-23 08:36:07.226218457 +0000 UTC m=+6619.534731220" Nov 23 08:36:08 crc kubenswrapper[4988]: I1123 08:36:08.210464 4988 generic.go:334] "Generic (PLEG): container finished" podID="7fd4d09f-742e-4867-b801-1604620d3365" containerID="b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b" exitCode=0 Nov 23 08:36:08 crc kubenswrapper[4988]: I1123 08:36:08.210516 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96zxc" event={"ID":"7fd4d09f-742e-4867-b801-1604620d3365","Type":"ContainerDied","Data":"b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b"} Nov 23 08:36:09 crc kubenswrapper[4988]: I1123 08:36:09.221019 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96zxc" event={"ID":"7fd4d09f-742e-4867-b801-1604620d3365","Type":"ContainerStarted","Data":"329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5"} Nov 23 08:36:09 crc kubenswrapper[4988]: I1123 08:36:09.247963 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-96zxc" podStartSLOduration=2.740298121 podStartE2EDuration="10.247944575s" podCreationTimestamp="2025-11-23 08:35:59 +0000 UTC" firstStartedPulling="2025-11-23 08:36:01.123326656 +0000 UTC m=+6613.431839419" lastFinishedPulling="2025-11-23 08:36:08.63097311 +0000 UTC m=+6620.939485873" observedRunningTime="2025-11-23 08:36:09.237604391 +0000 UTC m=+6621.546117154" watchObservedRunningTime="2025-11-23 08:36:09.247944575 +0000 UTC m=+6621.556457338" Nov 23 08:36:09 crc kubenswrapper[4988]: I1123 08:36:09.738336 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:36:09 crc kubenswrapper[4988]: I1123 08:36:09.738406 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:36:10 crc kubenswrapper[4988]: I1123 08:36:10.804147 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-96zxc" podUID="7fd4d09f-742e-4867-b801-1604620d3365" containerName="registry-server" probeResult="failure" output=< Nov 23 08:36:10 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:36:10 crc kubenswrapper[4988]: > Nov 23 08:36:11 crc kubenswrapper[4988]: I1123 08:36:11.496848 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:36:11 crc kubenswrapper[4988]: E1123 08:36:11.497378 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:36:12 crc kubenswrapper[4988]: I1123 08:36:12.140630 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:12 crc kubenswrapper[4988]: I1123 08:36:12.141398 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:12 crc kubenswrapper[4988]: I1123 08:36:12.215228 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:12 crc kubenswrapper[4988]: I1123 08:36:12.314269 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:12 crc kubenswrapper[4988]: I1123 08:36:12.572504 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnbn9"] Nov 23 08:36:14 crc kubenswrapper[4988]: I1123 08:36:14.270564 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wnbn9" podUID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerName="registry-server" containerID="cri-o://c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7" gracePeriod=2 Nov 23 08:36:14 crc kubenswrapper[4988]: I1123 08:36:14.737835 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:14 crc kubenswrapper[4988]: I1123 08:36:14.904011 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-utilities\") pod \"998ff700-a63b-4178-b456-57ad8e5f36e5\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " Nov 23 08:36:14 crc kubenswrapper[4988]: I1123 08:36:14.904286 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9lnm\" (UniqueName: \"kubernetes.io/projected/998ff700-a63b-4178-b456-57ad8e5f36e5-kube-api-access-s9lnm\") pod \"998ff700-a63b-4178-b456-57ad8e5f36e5\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " Nov 23 08:36:14 crc kubenswrapper[4988]: I1123 08:36:14.904361 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-catalog-content\") pod \"998ff700-a63b-4178-b456-57ad8e5f36e5\" (UID: \"998ff700-a63b-4178-b456-57ad8e5f36e5\") " Nov 23 08:36:14 crc kubenswrapper[4988]: I1123 08:36:14.905361 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-utilities" (OuterVolumeSpecName: "utilities") pod "998ff700-a63b-4178-b456-57ad8e5f36e5" (UID: "998ff700-a63b-4178-b456-57ad8e5f36e5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:36:14 crc kubenswrapper[4988]: I1123 08:36:14.913687 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998ff700-a63b-4178-b456-57ad8e5f36e5-kube-api-access-s9lnm" (OuterVolumeSpecName: "kube-api-access-s9lnm") pod "998ff700-a63b-4178-b456-57ad8e5f36e5" (UID: "998ff700-a63b-4178-b456-57ad8e5f36e5"). InnerVolumeSpecName "kube-api-access-s9lnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:36:14 crc kubenswrapper[4988]: I1123 08:36:14.956222 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "998ff700-a63b-4178-b456-57ad8e5f36e5" (UID: "998ff700-a63b-4178-b456-57ad8e5f36e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.010773 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.010826 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998ff700-a63b-4178-b456-57ad8e5f36e5-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.010840 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9lnm\" (UniqueName: \"kubernetes.io/projected/998ff700-a63b-4178-b456-57ad8e5f36e5-kube-api-access-s9lnm\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.281653 4988 generic.go:334] "Generic (PLEG): container finished" podID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerID="c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7" exitCode=0 Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.281723 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnbn9" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.281729 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnbn9" event={"ID":"998ff700-a63b-4178-b456-57ad8e5f36e5","Type":"ContainerDied","Data":"c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7"} Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.282080 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnbn9" event={"ID":"998ff700-a63b-4178-b456-57ad8e5f36e5","Type":"ContainerDied","Data":"ba9560f8949b12d97c2d156c672d4897dc9f7ad2a49444dfbe109ffe7619628f"} Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.282105 4988 scope.go:117] "RemoveContainer" containerID="c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.303759 4988 scope.go:117] "RemoveContainer" containerID="5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.317811 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnbn9"] Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.325934 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnbn9"] Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.340031 4988 scope.go:117] "RemoveContainer" containerID="12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.382407 4988 scope.go:117] "RemoveContainer" containerID="c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7" Nov 23 08:36:15 crc kubenswrapper[4988]: E1123 08:36:15.382841 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7\": container with ID starting with c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7 not found: ID does not exist" containerID="c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.382887 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7"} err="failed to get container status \"c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7\": rpc error: code = NotFound desc = could not find container \"c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7\": container with ID starting with c6ec94424510f88f18295bc7c43d1d4ee1afe172a50e37f84721e691c8ccdaa7 not found: ID does not exist" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.382916 4988 scope.go:117] "RemoveContainer" containerID="5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30" Nov 23 08:36:15 crc kubenswrapper[4988]: E1123 08:36:15.383224 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30\": container with ID starting with 5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30 not found: ID does not exist" containerID="5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.383257 4988 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30"} err="failed to get container status \"5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30\": rpc error: code = NotFound desc = could not find container \"5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30\": container with ID starting with 5760636a9fac004267b54752f09a8c73107b08f53f708dc9a3abe695bbc50e30 not found: ID does not exist" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.383279 4988 scope.go:117] "RemoveContainer" containerID="12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148" Nov 23 08:36:15 crc kubenswrapper[4988]: E1123 08:36:15.383599 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148\": container with ID starting with 12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148 not found: ID does not exist" containerID="12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148" Nov 23 08:36:15 crc kubenswrapper[4988]: I1123 08:36:15.383649 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148"} err="failed to get container status \"12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148\": rpc error: code = NotFound desc = could not find container \"12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148\": container with ID starting with 12739f6ffabec58663e09916720e0ef3c72e6496608528738c709168c5005148 not found: ID does not exist" Nov 23 08:36:16 crc kubenswrapper[4988]: I1123 08:36:16.508523 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998ff700-a63b-4178-b456-57ad8e5f36e5" path="/var/lib/kubelet/pods/998ff700-a63b-4178-b456-57ad8e5f36e5/volumes" Nov 23 08:36:19 crc kubenswrapper[4988]: I1123 08:36:19.845100 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:36:19 crc kubenswrapper[4988]: I1123 08:36:19.963453 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:36:20 crc kubenswrapper[4988]: I1123 08:36:20.092678 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96zxc"] Nov 23 08:36:21 crc kubenswrapper[4988]: I1123 08:36:21.361346 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-96zxc" podUID="7fd4d09f-742e-4867-b801-1604620d3365" containerName="registry-server" containerID="cri-o://329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5" gracePeriod=2 Nov 23 08:36:21 crc kubenswrapper[4988]: I1123 08:36:21.891769 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.058166 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt2cp\" (UniqueName: \"kubernetes.io/projected/7fd4d09f-742e-4867-b801-1604620d3365-kube-api-access-kt2cp\") pod \"7fd4d09f-742e-4867-b801-1604620d3365\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.058246 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-utilities\") pod \"7fd4d09f-742e-4867-b801-1604620d3365\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.058304 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-catalog-content\") pod \"7fd4d09f-742e-4867-b801-1604620d3365\" (UID: \"7fd4d09f-742e-4867-b801-1604620d3365\") " Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.064505 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd4d09f-742e-4867-b801-1604620d3365-kube-api-access-kt2cp" (OuterVolumeSpecName: "kube-api-access-kt2cp") pod "7fd4d09f-742e-4867-b801-1604620d3365" (UID: "7fd4d09f-742e-4867-b801-1604620d3365"). InnerVolumeSpecName "kube-api-access-kt2cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.065142 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-utilities" (OuterVolumeSpecName: "utilities") pod "7fd4d09f-742e-4867-b801-1604620d3365" (UID: "7fd4d09f-742e-4867-b801-1604620d3365"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.150828 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fd4d09f-742e-4867-b801-1604620d3365" (UID: "7fd4d09f-742e-4867-b801-1604620d3365"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.161363 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt2cp\" (UniqueName: \"kubernetes.io/projected/7fd4d09f-742e-4867-b801-1604620d3365-kube-api-access-kt2cp\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.161409 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.161424 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd4d09f-742e-4867-b801-1604620d3365-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.370976 4988 generic.go:334] "Generic (PLEG): container finished" podID="7fd4d09f-742e-4867-b801-1604620d3365" containerID="329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5" exitCode=0 Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.371036 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96zxc" event={"ID":"7fd4d09f-742e-4867-b801-1604620d3365","Type":"ContainerDied","Data":"329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5"} Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.371076 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96zxc" event={"ID":"7fd4d09f-742e-4867-b801-1604620d3365","Type":"ContainerDied","Data":"79abf02115ecef82d3c0b0d437c0df239db0f6bc1ca71dd707bdeb968541fcb7"} Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.371106 4988 scope.go:117] "RemoveContainer" containerID="329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.371162 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-96zxc" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.395682 4988 scope.go:117] "RemoveContainer" containerID="b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.419652 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96zxc"] Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.427519 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-96zxc"] Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.444736 4988 scope.go:117] "RemoveContainer" containerID="1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.479664 4988 scope.go:117] "RemoveContainer" containerID="329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5" Nov 23 08:36:22 crc kubenswrapper[4988]: E1123 08:36:22.480261 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5\": container with ID starting with 329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5 not found: ID does not exist" containerID="329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.480295 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5"} err="failed to get container status \"329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5\": rpc error: code = NotFound desc = could not find container \"329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5\": container with ID starting with 329dec1094d40ed46a7bc8aefcdd804f186ab85ce9b4290c215ba830923278f5 not found: ID does not exist" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.480317 4988 scope.go:117] "RemoveContainer" containerID="b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b" Nov 23 08:36:22 crc kubenswrapper[4988]: E1123 08:36:22.480790 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b\": container with ID starting with b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b not found: ID does not exist" containerID="b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.480814 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b"} err="failed to get container status \"b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b\": rpc error: code = NotFound desc = could not find container \"b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b\": container with ID starting with b190bcb8dcf8040809e7ae950bdc9db950e035d84eb955c3538e6f6fb51bd24b not found: ID does not exist" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.480827 4988 scope.go:117] "RemoveContainer" containerID="1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805" Nov 23 08:36:22 crc kubenswrapper[4988]: E1123 08:36:22.481055 4988 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805\": container with ID starting with 1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805 not found: ID does not exist" containerID="1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.481156 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805"} err="failed to get container status \"1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805\": rpc error: code = NotFound desc = could not find container \"1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805\": container with ID starting with 1c435a070a275ae9dff83fd68bde8fb7df8e02ce77709292f5e8d8ab5d2ed805 not found: ID does not exist" Nov 23 08:36:22 crc kubenswrapper[4988]: I1123 08:36:22.507565 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd4d09f-742e-4867-b801-1604620d3365" path="/var/lib/kubelet/pods/7fd4d09f-742e-4867-b801-1604620d3365/volumes" Nov 23 08:36:24 crc kubenswrapper[4988]: I1123 08:36:24.497316 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:36:24 crc kubenswrapper[4988]: E1123 08:36:24.498025 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:36:35 crc kubenswrapper[4988]: I1123 08:36:35.497373 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:36:35 crc kubenswrapper[4988]: E1123 08:36:35.498695 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:36:47 crc kubenswrapper[4988]: I1123 08:36:47.496621 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:36:47 crc kubenswrapper[4988]: E1123 08:36:47.497771 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:36:58 crc kubenswrapper[4988]: I1123 08:36:58.521622 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:36:58 crc kubenswrapper[4988]: E1123 08:36:58.523044 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:37:13 crc kubenswrapper[4988]: I1123 08:37:13.496651 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:37:13 crc kubenswrapper[4988]: E1123 08:37:13.497470 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:37:24 crc kubenswrapper[4988]: I1123 08:37:24.497714 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:37:24 crc kubenswrapper[4988]: E1123 08:37:24.499245 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:37:32 crc kubenswrapper[4988]: I1123 08:37:32.054383 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-6x92q"] Nov 23 08:37:32 crc kubenswrapper[4988]: I1123 08:37:32.071062 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-1f30-account-create-5plkb"] Nov 23 08:37:32 crc kubenswrapper[4988]: I1123 08:37:32.082459 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-6x92q"] Nov 23 08:37:32 crc kubenswrapper[4988]: I1123 08:37:32.094511 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-1f30-account-create-5plkb"] Nov 23 08:37:32 crc kubenswrapper[4988]: I1123 08:37:32.514089 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4dee81-335a-48bb-aa1f-22a74fc5a3a8" path="/var/lib/kubelet/pods/0f4dee81-335a-48bb-aa1f-22a74fc5a3a8/volumes" Nov 23 08:37:32 crc kubenswrapper[4988]: I1123 08:37:32.515104 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c4cb7e-e9e5-417c-8ae2-60d15b0abd27" path="/var/lib/kubelet/pods/84c4cb7e-e9e5-417c-8ae2-60d15b0abd27/volumes" Nov 23 08:37:39 crc kubenswrapper[4988]: I1123 08:37:39.497772 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:37:39 crc kubenswrapper[4988]: E1123 08:37:39.498995 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:37:44 crc kubenswrapper[4988]: I1123 
08:37:44.073977 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-mt9sl"] Nov 23 08:37:44 crc kubenswrapper[4988]: I1123 08:37:44.085360 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-mt9sl"] Nov 23 08:37:44 crc kubenswrapper[4988]: I1123 08:37:44.515567 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63f4ea60-7e24-4d82-9109-0c8494a900ba" path="/var/lib/kubelet/pods/63f4ea60-7e24-4d82-9109-0c8494a900ba/volumes" Nov 23 08:37:50 crc kubenswrapper[4988]: I1123 08:37:50.496822 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:37:50 crc kubenswrapper[4988]: E1123 08:37:50.497536 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:38:01 crc kubenswrapper[4988]: I1123 08:38:01.496605 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:38:01 crc kubenswrapper[4988]: E1123 08:38:01.497563 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:38:03 crc kubenswrapper[4988]: I1123 08:38:03.113618 4988 scope.go:117] "RemoveContainer" containerID="6d0873795938a4bfdb3b38aac7a68a066db6fe941091449701e75bd7781c6f8d" Nov 23 08:38:03 crc kubenswrapper[4988]: I1123 08:38:03.139606 4988 scope.go:117] "RemoveContainer" containerID="0aac1b9b2f1434d02ec58308da5593793acc9f4e61b9bf1e0aafbd617272b7e4" Nov 23 08:38:03 crc kubenswrapper[4988]: I1123 08:38:03.197668 4988 scope.go:117] "RemoveContainer" containerID="6d1afb6861c4e1be1149b542c9375880ac9db63c06e92a2b6514ec5950499c90" Nov 23 08:38:12 crc kubenswrapper[4988]: I1123 08:38:12.497153 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:38:12 crc kubenswrapper[4988]: E1123 08:38:12.498676 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:38:27 crc kubenswrapper[4988]: I1123 08:38:27.495969 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:38:27 crc kubenswrapper[4988]: E1123 08:38:27.496705 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:38:42 crc kubenswrapper[4988]: I1123 08:38:42.496636 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:38:42 crc kubenswrapper[4988]: E1123 08:38:42.497940 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:38:53 crc kubenswrapper[4988]: I1123 08:38:53.497024 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:38:53 crc kubenswrapper[4988]: E1123 08:38:53.499630 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:39:06 crc kubenswrapper[4988]: I1123 08:39:06.496830 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:39:06 crc kubenswrapper[4988]: E1123 08:39:06.499977 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:39:21 crc kubenswrapper[4988]: I1123 08:39:21.497259 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:39:21 crc kubenswrapper[4988]: E1123 08:39:21.498706 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:39:36 crc kubenswrapper[4988]: I1123 08:39:36.496128 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:39:36 crc kubenswrapper[4988]: E1123 08:39:36.496922 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" 
podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:39:39 crc kubenswrapper[4988]: I1123 08:39:39.557123 4988 generic.go:334] "Generic (PLEG): container finished" podID="7d3fd979-5f3d-43cf-a771-80668ab96673" containerID="8ca54edca6b9d93942e34490b979ab121890d93cda02899dc364d06668704483" exitCode=0 Nov 23 08:39:39 crc kubenswrapper[4988]: I1123 08:39:39.557300 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" event={"ID":"7d3fd979-5f3d-43cf-a771-80668ab96673","Type":"ContainerDied","Data":"8ca54edca6b9d93942e34490b979ab121890d93cda02899dc364d06668704483"} Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.042029 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.167578 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-inventory\") pod \"7d3fd979-5f3d-43cf-a771-80668ab96673\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.167780 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srb76\" (UniqueName: \"kubernetes.io/projected/7d3fd979-5f3d-43cf-a771-80668ab96673-kube-api-access-srb76\") pod \"7d3fd979-5f3d-43cf-a771-80668ab96673\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.167818 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-tripleo-cleanup-combined-ca-bundle\") pod \"7d3fd979-5f3d-43cf-a771-80668ab96673\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.167924 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-ssh-key\") pod \"7d3fd979-5f3d-43cf-a771-80668ab96673\" (UID: \"7d3fd979-5f3d-43cf-a771-80668ab96673\") " Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.173960 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3fd979-5f3d-43cf-a771-80668ab96673-kube-api-access-srb76" (OuterVolumeSpecName: "kube-api-access-srb76") pod "7d3fd979-5f3d-43cf-a771-80668ab96673" (UID: "7d3fd979-5f3d-43cf-a771-80668ab96673"). InnerVolumeSpecName "kube-api-access-srb76". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.175122 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "7d3fd979-5f3d-43cf-a771-80668ab96673" (UID: "7d3fd979-5f3d-43cf-a771-80668ab96673"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.204214 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-inventory" (OuterVolumeSpecName: "inventory") pod "7d3fd979-5f3d-43cf-a771-80668ab96673" (UID: "7d3fd979-5f3d-43cf-a771-80668ab96673"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.218473 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7d3fd979-5f3d-43cf-a771-80668ab96673" (UID: "7d3fd979-5f3d-43cf-a771-80668ab96673"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.272032 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srb76\" (UniqueName: \"kubernetes.io/projected/7d3fd979-5f3d-43cf-a771-80668ab96673-kube-api-access-srb76\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.272077 4988 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.272092 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.272105 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7d3fd979-5f3d-43cf-a771-80668ab96673-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.581541 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" event={"ID":"7d3fd979-5f3d-43cf-a771-80668ab96673","Type":"ContainerDied","Data":"1afc1f7c79a84d56d62b96fbc1dca8d0df92c3f42f1e9d891774636c1a91de75"} Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.581615 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1afc1f7c79a84d56d62b96fbc1dca8d0df92c3f42f1e9d891774636c1a91de75" Nov 23 08:39:41 crc kubenswrapper[4988]: I1123 08:39:41.581618 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp" Nov 23 08:39:49 crc kubenswrapper[4988]: I1123 08:39:49.496707 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:39:49 crc kubenswrapper[4988]: E1123 08:39:49.497349 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.157880 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-xn2d7"] Nov 23 08:39:53 crc kubenswrapper[4988]: E1123 08:39:53.158999 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerName="extract-utilities" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.159017 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerName="extract-utilities" Nov 23 08:39:53 crc kubenswrapper[4988]: E1123 08:39:53.159037 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd4d09f-742e-4867-b801-1604620d3365" containerName="registry-server" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.159044 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd4d09f-742e-4867-b801-1604620d3365" containerName="registry-server" Nov 23 08:39:53 crc kubenswrapper[4988]: E1123 08:39:53.159058 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3fd979-5f3d-43cf-a771-80668ab96673" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.159067 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3fd979-5f3d-43cf-a771-80668ab96673" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 23 08:39:53 crc kubenswrapper[4988]: E1123 08:39:53.159079 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd4d09f-742e-4867-b801-1604620d3365" containerName="extract-utilities" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.159086 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd4d09f-742e-4867-b801-1604620d3365" containerName="extract-utilities" Nov 23 08:39:53 crc kubenswrapper[4988]: E1123 08:39:53.159112 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerName="extract-content" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.159120 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerName="extract-content" Nov 23 08:39:53 crc kubenswrapper[4988]: E1123 08:39:53.159156 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd4d09f-742e-4867-b801-1604620d3365" containerName="extract-content" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.159165 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd4d09f-742e-4867-b801-1604620d3365" containerName="extract-content" Nov 23 08:39:53 crc kubenswrapper[4988]: E1123 08:39:53.159181 4988 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerName="registry-server" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.159856 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerName="registry-server" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.160096 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3fd979-5f3d-43cf-a771-80668ab96673" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.160115 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="998ff700-a63b-4178-b456-57ad8e5f36e5" containerName="registry-server" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.160152 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd4d09f-742e-4867-b801-1604620d3365" containerName="registry-server" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.161251 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.165854 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.165875 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-xn2d7"] Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.166013 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.168057 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.168101 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.322255 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.322319 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.322456 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-inventory\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.322488 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgjzn\" (UniqueName: 
\"kubernetes.io/projected/f045fce4-5fd2-4a28-8502-1b840639d64c-kube-api-access-wgjzn\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.424084 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.424134 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.424247 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-inventory\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.424275 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgjzn\" (UniqueName: \"kubernetes.io/projected/f045fce4-5fd2-4a28-8502-1b840639d64c-kube-api-access-wgjzn\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.431622 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-inventory\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.432774 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.433291 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.443295 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgjzn\" (UniqueName: \"kubernetes.io/projected/f045fce4-5fd2-4a28-8502-1b840639d64c-kube-api-access-wgjzn\") pod \"bootstrap-openstack-openstack-cell1-xn2d7\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" 
Nov 23 08:39:53 crc kubenswrapper[4988]: I1123 08:39:53.491869 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:39:54 crc kubenswrapper[4988]: I1123 08:39:54.089412 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-xn2d7"] Nov 23 08:39:54 crc kubenswrapper[4988]: I1123 08:39:54.763660 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" event={"ID":"f045fce4-5fd2-4a28-8502-1b840639d64c","Type":"ContainerStarted","Data":"4924bd7b806dd382eed43285d28d874ffbd5e3768d293572cbb37281f687f87d"} Nov 23 08:39:55 crc kubenswrapper[4988]: I1123 08:39:55.774694 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" event={"ID":"f045fce4-5fd2-4a28-8502-1b840639d64c","Type":"ContainerStarted","Data":"83fdcdd32648dbfca870a2718a108199f27cccf12e91873641321f2df50a64ca"} Nov 23 08:39:55 crc kubenswrapper[4988]: I1123 08:39:55.816069 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" podStartSLOduration=2.037710111 podStartE2EDuration="2.816046586s" podCreationTimestamp="2025-11-23 08:39:53 +0000 UTC" firstStartedPulling="2025-11-23 08:39:54.108961628 +0000 UTC m=+6846.417474401" lastFinishedPulling="2025-11-23 08:39:54.887298103 +0000 UTC m=+6847.195810876" observedRunningTime="2025-11-23 08:39:55.803083158 +0000 UTC m=+6848.111595921" watchObservedRunningTime="2025-11-23 08:39:55.816046586 +0000 UTC m=+6848.124559359" Nov 23 08:40:01 crc kubenswrapper[4988]: I1123 08:40:01.497248 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:40:01 crc kubenswrapper[4988]: E1123 08:40:01.498620 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:40:13 crc kubenswrapper[4988]: I1123 08:40:13.496479 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:40:13 crc kubenswrapper[4988]: E1123 08:40:13.497310 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:40:25 crc kubenswrapper[4988]: I1123 08:40:25.496834 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:40:26 crc kubenswrapper[4988]: I1123 08:40:26.098344 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"3496f7af6b8e40667ee60a91931d14cc8f818d2baad108df4e0225f6f97f195f"} Nov 23 08:42:51 crc 
kubenswrapper[4988]: I1123 08:42:51.672596 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:42:51 crc kubenswrapper[4988]: I1123 08:42:51.673222 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:42:57 crc kubenswrapper[4988]: I1123 08:42:57.797775 4988 generic.go:334] "Generic (PLEG): container finished" podID="f045fce4-5fd2-4a28-8502-1b840639d64c" containerID="83fdcdd32648dbfca870a2718a108199f27cccf12e91873641321f2df50a64ca" exitCode=0 Nov 23 08:42:57 crc kubenswrapper[4988]: I1123 08:42:57.797833 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" event={"ID":"f045fce4-5fd2-4a28-8502-1b840639d64c","Type":"ContainerDied","Data":"83fdcdd32648dbfca870a2718a108199f27cccf12e91873641321f2df50a64ca"} Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.305387 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.452762 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-bootstrap-combined-ca-bundle\") pod \"f045fce4-5fd2-4a28-8502-1b840639d64c\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.453178 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-ssh-key\") pod \"f045fce4-5fd2-4a28-8502-1b840639d64c\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.453309 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgjzn\" (UniqueName: \"kubernetes.io/projected/f045fce4-5fd2-4a28-8502-1b840639d64c-kube-api-access-wgjzn\") pod \"f045fce4-5fd2-4a28-8502-1b840639d64c\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.453405 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-inventory\") pod \"f045fce4-5fd2-4a28-8502-1b840639d64c\" (UID: \"f045fce4-5fd2-4a28-8502-1b840639d64c\") " Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.465365 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f045fce4-5fd2-4a28-8502-1b840639d64c" (UID: "f045fce4-5fd2-4a28-8502-1b840639d64c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.479521 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f045fce4-5fd2-4a28-8502-1b840639d64c-kube-api-access-wgjzn" (OuterVolumeSpecName: "kube-api-access-wgjzn") pod "f045fce4-5fd2-4a28-8502-1b840639d64c" (UID: "f045fce4-5fd2-4a28-8502-1b840639d64c"). InnerVolumeSpecName "kube-api-access-wgjzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.483485 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f045fce4-5fd2-4a28-8502-1b840639d64c" (UID: "f045fce4-5fd2-4a28-8502-1b840639d64c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.489549 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-inventory" (OuterVolumeSpecName: "inventory") pod "f045fce4-5fd2-4a28-8502-1b840639d64c" (UID: "f045fce4-5fd2-4a28-8502-1b840639d64c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.556652 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.556931 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgjzn\" (UniqueName: \"kubernetes.io/projected/f045fce4-5fd2-4a28-8502-1b840639d64c-kube-api-access-wgjzn\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.557026 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.557097 4988 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f045fce4-5fd2-4a28-8502-1b840639d64c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.822266 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" event={"ID":"f045fce4-5fd2-4a28-8502-1b840639d64c","Type":"ContainerDied","Data":"4924bd7b806dd382eed43285d28d874ffbd5e3768d293572cbb37281f687f87d"} Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.822318 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4924bd7b806dd382eed43285d28d874ffbd5e3768d293572cbb37281f687f87d" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.822397 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-xn2d7" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.985976 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-rjb52"] Nov 23 08:42:59 crc kubenswrapper[4988]: E1123 08:42:59.986405 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f045fce4-5fd2-4a28-8502-1b840639d64c" containerName="bootstrap-openstack-openstack-cell1" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.986416 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f045fce4-5fd2-4a28-8502-1b840639d64c" containerName="bootstrap-openstack-openstack-cell1" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.986603 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f045fce4-5fd2-4a28-8502-1b840639d64c" containerName="bootstrap-openstack-openstack-cell1" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.987258 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.996119 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.996294 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.996468 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:42:59 crc kubenswrapper[4988]: I1123 08:42:59.996567 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.009541 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-rjb52"] Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.073562 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-ssh-key\") pod \"download-cache-openstack-openstack-cell1-rjb52\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.073716 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-inventory\") pod \"download-cache-openstack-openstack-cell1-rjb52\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.073810 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96pr\" (UniqueName: \"kubernetes.io/projected/84c4086a-6e6a-4f2a-8fc2-b5416199c070-kube-api-access-g96pr\") pod \"download-cache-openstack-openstack-cell1-rjb52\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.175791 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-inventory\") pod \"download-cache-openstack-openstack-cell1-rjb52\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.175886 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g96pr\" (UniqueName: \"kubernetes.io/projected/84c4086a-6e6a-4f2a-8fc2-b5416199c070-kube-api-access-g96pr\") pod \"download-cache-openstack-openstack-cell1-rjb52\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.175949 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-ssh-key\") pod \"download-cache-openstack-openstack-cell1-rjb52\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.179388 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-ssh-key\") pod \"download-cache-openstack-openstack-cell1-rjb52\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.185153 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-inventory\") pod \"download-cache-openstack-openstack-cell1-rjb52\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.192046 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g96pr\" (UniqueName: \"kubernetes.io/projected/84c4086a-6e6a-4f2a-8fc2-b5416199c070-kube-api-access-g96pr\") pod \"download-cache-openstack-openstack-cell1-rjb52\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.319700 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.862706 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-rjb52"] Nov 23 08:43:00 crc kubenswrapper[4988]: I1123 08:43:00.880621 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.635347 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r28rl"] Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.638732 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.659818 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r28rl"] Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.716456 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd75c\" (UniqueName: \"kubernetes.io/projected/db604426-6dab-49b9-bfea-1ab872698be3-kube-api-access-zd75c\") pod \"community-operators-r28rl\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.716518 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-utilities\") pod \"community-operators-r28rl\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.716744 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-catalog-content\") pod \"community-operators-r28rl\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.818317 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-catalog-content\") pod \"community-operators-r28rl\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.818379 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd75c\" (UniqueName: \"kubernetes.io/projected/db604426-6dab-49b9-bfea-1ab872698be3-kube-api-access-zd75c\") pod \"community-operators-r28rl\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.818403 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-utilities\") pod \"community-operators-r28rl\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.818901 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-utilities\") pod \"community-operators-r28rl\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.818986 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-catalog-content\") pod \"community-operators-r28rl\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.842605 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zd75c\" (UniqueName: \"kubernetes.io/projected/db604426-6dab-49b9-bfea-1ab872698be3-kube-api-access-zd75c\") pod \"community-operators-r28rl\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.845938 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-rjb52" event={"ID":"84c4086a-6e6a-4f2a-8fc2-b5416199c070","Type":"ContainerStarted","Data":"d409fc05aee4268509a6bdb2c9febd267caa4c9036f8e3d54d7834fa25de9d07"} Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.845987 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-rjb52" event={"ID":"84c4086a-6e6a-4f2a-8fc2-b5416199c070","Type":"ContainerStarted","Data":"69cb66996d056b717c7cbf677b500bae1b8eb4b428f69fedfba2a844d043fb61"} Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.868816 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-rjb52" podStartSLOduration=2.445749718 podStartE2EDuration="2.868790754s" podCreationTimestamp="2025-11-23 08:42:59 +0000 UTC" firstStartedPulling="2025-11-23 08:43:00.870049989 +0000 UTC m=+7033.178562752" lastFinishedPulling="2025-11-23 08:43:01.293090995 +0000 UTC m=+7033.601603788" observedRunningTime="2025-11-23 08:43:01.860547221 +0000 UTC m=+7034.169059984" watchObservedRunningTime="2025-11-23 08:43:01.868790754 +0000 UTC m=+7034.177303517" Nov 23 08:43:01 crc kubenswrapper[4988]: I1123 08:43:01.969110 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:02 crc kubenswrapper[4988]: I1123 08:43:02.529073 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r28rl"] Nov 23 08:43:02 crc kubenswrapper[4988]: I1123 08:43:02.858223 4988 generic.go:334] "Generic (PLEG): container finished" podID="db604426-6dab-49b9-bfea-1ab872698be3" containerID="9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38" exitCode=0 Nov 23 08:43:02 crc kubenswrapper[4988]: I1123 08:43:02.858476 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28rl" event={"ID":"db604426-6dab-49b9-bfea-1ab872698be3","Type":"ContainerDied","Data":"9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38"} Nov 23 08:43:02 crc kubenswrapper[4988]: I1123 08:43:02.858538 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28rl" event={"ID":"db604426-6dab-49b9-bfea-1ab872698be3","Type":"ContainerStarted","Data":"8e8c369e7d88596d8f421716d1ad945854bbe763f7e1bd12cf6eaa96582ce3bf"} Nov 23 08:43:03 crc kubenswrapper[4988]: I1123 08:43:03.873451 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28rl" event={"ID":"db604426-6dab-49b9-bfea-1ab872698be3","Type":"ContainerStarted","Data":"cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881"} Nov 23 08:43:05 crc kubenswrapper[4988]: I1123 08:43:05.897082 4988 generic.go:334] "Generic (PLEG): container finished" podID="db604426-6dab-49b9-bfea-1ab872698be3" containerID="cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881" exitCode=0 Nov 23 08:43:05 crc kubenswrapper[4988]: I1123 08:43:05.897169 4988 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28rl" event={"ID":"db604426-6dab-49b9-bfea-1ab872698be3","Type":"ContainerDied","Data":"cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881"} Nov 23 08:43:06 crc kubenswrapper[4988]: I1123 08:43:06.908424 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28rl" event={"ID":"db604426-6dab-49b9-bfea-1ab872698be3","Type":"ContainerStarted","Data":"15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9"} Nov 23 08:43:06 crc kubenswrapper[4988]: I1123 08:43:06.929244 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r28rl" podStartSLOduration=2.458868945 podStartE2EDuration="5.929223935s" podCreationTimestamp="2025-11-23 08:43:01 +0000 UTC" firstStartedPulling="2025-11-23 08:43:02.864117026 +0000 UTC m=+7035.172629789" lastFinishedPulling="2025-11-23 08:43:06.334472016 +0000 UTC m=+7038.642984779" observedRunningTime="2025-11-23 08:43:06.924251663 +0000 UTC m=+7039.232764426" watchObservedRunningTime="2025-11-23 08:43:06.929223935 +0000 UTC m=+7039.237736698" Nov 23 08:43:11 crc kubenswrapper[4988]: I1123 08:43:11.969717 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:11 crc kubenswrapper[4988]: I1123 08:43:11.970066 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:12 crc kubenswrapper[4988]: I1123 08:43:12.026938 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:13 crc kubenswrapper[4988]: I1123 08:43:13.038519 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:13 crc kubenswrapper[4988]: I1123 08:43:13.097238 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r28rl"] Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.010678 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r28rl" podUID="db604426-6dab-49b9-bfea-1ab872698be3" containerName="registry-server" containerID="cri-o://15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9" gracePeriod=2 Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.483878 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.543218 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-catalog-content\") pod \"db604426-6dab-49b9-bfea-1ab872698be3\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.543307 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd75c\" (UniqueName: \"kubernetes.io/projected/db604426-6dab-49b9-bfea-1ab872698be3-kube-api-access-zd75c\") pod \"db604426-6dab-49b9-bfea-1ab872698be3\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.543390 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-utilities\") pod \"db604426-6dab-49b9-bfea-1ab872698be3\" (UID: \"db604426-6dab-49b9-bfea-1ab872698be3\") " Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.546258 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-utilities" (OuterVolumeSpecName: "utilities") pod "db604426-6dab-49b9-bfea-1ab872698be3" (UID: "db604426-6dab-49b9-bfea-1ab872698be3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.566741 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db604426-6dab-49b9-bfea-1ab872698be3-kube-api-access-zd75c" (OuterVolumeSpecName: "kube-api-access-zd75c") pod "db604426-6dab-49b9-bfea-1ab872698be3" (UID: "db604426-6dab-49b9-bfea-1ab872698be3"). InnerVolumeSpecName "kube-api-access-zd75c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.593548 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db604426-6dab-49b9-bfea-1ab872698be3" (UID: "db604426-6dab-49b9-bfea-1ab872698be3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.645003 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.645047 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd75c\" (UniqueName: \"kubernetes.io/projected/db604426-6dab-49b9-bfea-1ab872698be3-kube-api-access-zd75c\") on node \"crc\" DevicePath \"\"" Nov 23 08:43:15 crc kubenswrapper[4988]: I1123 08:43:15.645060 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db604426-6dab-49b9-bfea-1ab872698be3-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.028460 4988 generic.go:334] "Generic (PLEG): container finished" podID="db604426-6dab-49b9-bfea-1ab872698be3" containerID="15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9" exitCode=0 Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.028582 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28rl" event={"ID":"db604426-6dab-49b9-bfea-1ab872698be3","Type":"ContainerDied","Data":"15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9"} Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.028930 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r28rl" event={"ID":"db604426-6dab-49b9-bfea-1ab872698be3","Type":"ContainerDied","Data":"8e8c369e7d88596d8f421716d1ad945854bbe763f7e1bd12cf6eaa96582ce3bf"} Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.028962 4988 scope.go:117] "RemoveContainer" containerID="15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.028639 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r28rl" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.060408 4988 scope.go:117] "RemoveContainer" containerID="cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.092633 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r28rl"] Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.100555 4988 scope.go:117] "RemoveContainer" containerID="9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.104071 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r28rl"] Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.156712 4988 scope.go:117] "RemoveContainer" containerID="15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9" Nov 23 08:43:16 crc kubenswrapper[4988]: E1123 08:43:16.157284 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9\": container with ID starting with 15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9 not found: ID does not exist" containerID="15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.157333 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9"} err="failed to get container status \"15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9\": rpc error: code = NotFound desc = could not find container \"15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9\": container with ID starting with 15900412b1f7be520ab239acb0a78cdd7483ac82bf562b13f50f5d90f5828cb9 not found: ID does not exist" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.157364 4988 scope.go:117] "RemoveContainer" containerID="cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881" Nov 23 08:43:16 crc kubenswrapper[4988]: E1123 08:43:16.157917 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881\": container with ID starting with cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881 not found: ID does not exist" containerID="cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.157968 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881"} err="failed to get container status \"cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881\": rpc error: code = NotFound desc = could not find container \"cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881\": container with ID starting with cc30f7a67b007fd7f4d4ee842cce1f9909bfd28f6f8154d73766b45ffe172881 not found: ID does not exist" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.158003 4988 scope.go:117] "RemoveContainer" containerID="9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38" Nov 23 08:43:16 crc kubenswrapper[4988]: E1123 08:43:16.158489 4988 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38\": container with ID starting with 9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38 not found: ID does not exist" containerID="9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.158517 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38"} err="failed to get container status \"9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38\": rpc error: code = NotFound desc = could not find container \"9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38\": container with ID starting with 9a253db059f68ee25e56e32066f5ba25ca7ee80d4c57d5a2b633ca7c3abaad38 not found: ID does not exist" Nov 23 08:43:16 crc kubenswrapper[4988]: I1123 08:43:16.508042 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db604426-6dab-49b9-bfea-1ab872698be3" path="/var/lib/kubelet/pods/db604426-6dab-49b9-bfea-1ab872698be3/volumes" Nov 23 08:43:21 crc kubenswrapper[4988]: I1123 08:43:21.672596 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:43:21 crc kubenswrapper[4988]: I1123 08:43:21.673069 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:43:51 crc kubenswrapper[4988]: I1123 08:43:51.672767 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:43:51 crc kubenswrapper[4988]: I1123 08:43:51.673526 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:43:51 crc kubenswrapper[4988]: I1123 08:43:51.673596 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 08:43:51 crc kubenswrapper[4988]: I1123 08:43:51.674752 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3496f7af6b8e40667ee60a91931d14cc8f818d2baad108df4e0225f6f97f195f"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:43:51 crc kubenswrapper[4988]: I1123 08:43:51.674891 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" 
podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://3496f7af6b8e40667ee60a91931d14cc8f818d2baad108df4e0225f6f97f195f" gracePeriod=600 Nov 23 08:43:52 crc kubenswrapper[4988]: I1123 08:43:52.408269 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="3496f7af6b8e40667ee60a91931d14cc8f818d2baad108df4e0225f6f97f195f" exitCode=0 Nov 23 08:43:52 crc kubenswrapper[4988]: I1123 08:43:52.408345 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"3496f7af6b8e40667ee60a91931d14cc8f818d2baad108df4e0225f6f97f195f"} Nov 23 08:43:52 crc kubenswrapper[4988]: I1123 08:43:52.408981 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"} Nov 23 08:43:52 crc kubenswrapper[4988]: I1123 08:43:52.409009 4988 scope.go:117] "RemoveContainer" containerID="502faba22db4948564adccbba2b30f8712f3ed63a1ab38aa600010c6ee55a00f" Nov 23 08:44:32 crc kubenswrapper[4988]: I1123 08:44:32.850468 4988 generic.go:334] "Generic (PLEG): container finished" podID="84c4086a-6e6a-4f2a-8fc2-b5416199c070" containerID="d409fc05aee4268509a6bdb2c9febd267caa4c9036f8e3d54d7834fa25de9d07" exitCode=0 Nov 23 08:44:32 crc kubenswrapper[4988]: I1123 08:44:32.850540 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-rjb52" event={"ID":"84c4086a-6e6a-4f2a-8fc2-b5416199c070","Type":"ContainerDied","Data":"d409fc05aee4268509a6bdb2c9febd267caa4c9036f8e3d54d7834fa25de9d07"} Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.357242 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.507716 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-ssh-key\") pod \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.507764 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-inventory\") pod \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.507835 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g96pr\" (UniqueName: \"kubernetes.io/projected/84c4086a-6e6a-4f2a-8fc2-b5416199c070-kube-api-access-g96pr\") pod \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\" (UID: \"84c4086a-6e6a-4f2a-8fc2-b5416199c070\") " Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.514308 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c4086a-6e6a-4f2a-8fc2-b5416199c070-kube-api-access-g96pr" (OuterVolumeSpecName: "kube-api-access-g96pr") pod "84c4086a-6e6a-4f2a-8fc2-b5416199c070" (UID: "84c4086a-6e6a-4f2a-8fc2-b5416199c070"). InnerVolumeSpecName "kube-api-access-g96pr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.546495 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "84c4086a-6e6a-4f2a-8fc2-b5416199c070" (UID: "84c4086a-6e6a-4f2a-8fc2-b5416199c070"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.563493 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-inventory" (OuterVolumeSpecName: "inventory") pod "84c4086a-6e6a-4f2a-8fc2-b5416199c070" (UID: "84c4086a-6e6a-4f2a-8fc2-b5416199c070"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.609798 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.609828 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84c4086a-6e6a-4f2a-8fc2-b5416199c070-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.609838 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g96pr\" (UniqueName: \"kubernetes.io/projected/84c4086a-6e6a-4f2a-8fc2-b5416199c070-kube-api-access-g96pr\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.871906 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-rjb52" event={"ID":"84c4086a-6e6a-4f2a-8fc2-b5416199c070","Type":"ContainerDied","Data":"69cb66996d056b717c7cbf677b500bae1b8eb4b428f69fedfba2a844d043fb61"} Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.871949 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69cb66996d056b717c7cbf677b500bae1b8eb4b428f69fedfba2a844d043fb61" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.872319 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-rjb52" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.963782 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-z8lv6"] Nov 23 08:44:34 crc kubenswrapper[4988]: E1123 08:44:34.964846 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db604426-6dab-49b9-bfea-1ab872698be3" containerName="extract-utilities" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.965305 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="db604426-6dab-49b9-bfea-1ab872698be3" containerName="extract-utilities" Nov 23 08:44:34 crc kubenswrapper[4988]: E1123 08:44:34.965410 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c4086a-6e6a-4f2a-8fc2-b5416199c070" containerName="download-cache-openstack-openstack-cell1" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.965506 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c4086a-6e6a-4f2a-8fc2-b5416199c070" containerName="download-cache-openstack-openstack-cell1" Nov 23 08:44:34 crc kubenswrapper[4988]: E1123 08:44:34.965568 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db604426-6dab-49b9-bfea-1ab872698be3" containerName="extract-content" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.965627 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="db604426-6dab-49b9-bfea-1ab872698be3" containerName="extract-content" Nov 23 08:44:34 crc kubenswrapper[4988]: E1123 08:44:34.965690 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db604426-6dab-49b9-bfea-1ab872698be3" containerName="registry-server" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.965741 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="db604426-6dab-49b9-bfea-1ab872698be3" containerName="registry-server" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.966052 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c4086a-6e6a-4f2a-8fc2-b5416199c070" containerName="download-cache-openstack-openstack-cell1" Nov 23 08:44:34 crc kubenswrapper[4988]: I1123 08:44:34.966263 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="db604426-6dab-49b9-bfea-1ab872698be3" containerName="registry-server" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.009179 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-z8lv6"] Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.009372 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.012010 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.013505 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.013903 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.014028 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.120560 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-ssh-key\") pod \"configure-network-openstack-openstack-cell1-z8lv6\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.120621 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrtrw\" (UniqueName: \"kubernetes.io/projected/4ad1eff7-64f5-4009-8142-48fd2984fa39-kube-api-access-qrtrw\") pod \"configure-network-openstack-openstack-cell1-z8lv6\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.121153 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-inventory\") pod \"configure-network-openstack-openstack-cell1-z8lv6\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.222826 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrtrw\" (UniqueName: \"kubernetes.io/projected/4ad1eff7-64f5-4009-8142-48fd2984fa39-kube-api-access-qrtrw\") pod \"configure-network-openstack-openstack-cell1-z8lv6\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.223127 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-inventory\") pod \"configure-network-openstack-openstack-cell1-z8lv6\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.223311 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-ssh-key\") pod \"configure-network-openstack-openstack-cell1-z8lv6\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.226885 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-ssh-key\") pod \"configure-network-openstack-openstack-cell1-z8lv6\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.227223 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-inventory\") pod \"configure-network-openstack-openstack-cell1-z8lv6\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.240123 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrtrw\" (UniqueName: \"kubernetes.io/projected/4ad1eff7-64f5-4009-8142-48fd2984fa39-kube-api-access-qrtrw\") pod \"configure-network-openstack-openstack-cell1-z8lv6\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.343416 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:44:35 crc kubenswrapper[4988]: W1123 08:44:35.880379 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ad1eff7_64f5_4009_8142_48fd2984fa39.slice/crio-0787b754481171f866102dfc76c2e17c310e4ba1e89841f8b163c3bf30820dbc WatchSource:0}: Error finding container 0787b754481171f866102dfc76c2e17c310e4ba1e89841f8b163c3bf30820dbc: Status 404 returned error can't find the container with id 0787b754481171f866102dfc76c2e17c310e4ba1e89841f8b163c3bf30820dbc Nov 23 08:44:35 crc kubenswrapper[4988]: I1123 08:44:35.882043 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-z8lv6"] Nov 23 08:44:36 crc kubenswrapper[4988]: I1123 08:44:36.894316 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" event={"ID":"4ad1eff7-64f5-4009-8142-48fd2984fa39","Type":"ContainerStarted","Data":"3ab04b64715422f4cf70db26a4f15ba8ea76cb923b166fe9e640f495259634ec"} Nov 23 08:44:36 crc kubenswrapper[4988]: I1123 08:44:36.894647 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" event={"ID":"4ad1eff7-64f5-4009-8142-48fd2984fa39","Type":"ContainerStarted","Data":"0787b754481171f866102dfc76c2e17c310e4ba1e89841f8b163c3bf30820dbc"} Nov 23 08:44:36 crc kubenswrapper[4988]: I1123 08:44:36.917807 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" podStartSLOduration=2.35638362 podStartE2EDuration="2.917783469s" podCreationTimestamp="2025-11-23 08:44:34 +0000 UTC" firstStartedPulling="2025-11-23 08:44:35.882656858 +0000 UTC m=+7128.191169621" lastFinishedPulling="2025-11-23 08:44:36.444056697 +0000 UTC m=+7128.752569470" observedRunningTime="2025-11-23 08:44:36.909145757 +0000 UTC m=+7129.217658530" watchObservedRunningTime="2025-11-23 08:44:36.917783469 +0000 UTC m=+7129.226296232" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.175172 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx"] Nov 23 
08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.177675 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.180402 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.187843 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.190285 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx"] Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.251736 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22ae8f69-6dc3-46d3-8a02-b04053381a7d-secret-volume\") pod \"collect-profiles-29398125-244cx\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.251808 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t784s\" (UniqueName: \"kubernetes.io/projected/22ae8f69-6dc3-46d3-8a02-b04053381a7d-kube-api-access-t784s\") pod \"collect-profiles-29398125-244cx\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.252006 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22ae8f69-6dc3-46d3-8a02-b04053381a7d-config-volume\") pod \"collect-profiles-29398125-244cx\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.354367 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22ae8f69-6dc3-46d3-8a02-b04053381a7d-secret-volume\") pod \"collect-profiles-29398125-244cx\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.355083 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t784s\" (UniqueName: \"kubernetes.io/projected/22ae8f69-6dc3-46d3-8a02-b04053381a7d-kube-api-access-t784s\") pod \"collect-profiles-29398125-244cx\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.355165 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22ae8f69-6dc3-46d3-8a02-b04053381a7d-config-volume\") pod \"collect-profiles-29398125-244cx\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.356532 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/22ae8f69-6dc3-46d3-8a02-b04053381a7d-config-volume\") pod \"collect-profiles-29398125-244cx\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.364002 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22ae8f69-6dc3-46d3-8a02-b04053381a7d-secret-volume\") pod \"collect-profiles-29398125-244cx\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.385749 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t784s\" (UniqueName: \"kubernetes.io/projected/22ae8f69-6dc3-46d3-8a02-b04053381a7d-kube-api-access-t784s\") pod \"collect-profiles-29398125-244cx\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.508174 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:00 crc kubenswrapper[4988]: W1123 08:45:00.966055 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22ae8f69_6dc3_46d3_8a02_b04053381a7d.slice/crio-4bc891a33981f1689098021faf5e683fd6628271722b75972f8f9aa3b9702369 WatchSource:0}: Error finding container 4bc891a33981f1689098021faf5e683fd6628271722b75972f8f9aa3b9702369: Status 404 returned error can't find the container with id 4bc891a33981f1689098021faf5e683fd6628271722b75972f8f9aa3b9702369 Nov 23 08:45:00 crc kubenswrapper[4988]: I1123 08:45:00.966328 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx"] Nov 23 08:45:01 crc kubenswrapper[4988]: I1123 08:45:01.148139 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" event={"ID":"22ae8f69-6dc3-46d3-8a02-b04053381a7d","Type":"ContainerStarted","Data":"82ff6a8af624a8dc96afa67105deda9ca7bea70b7be9c8d71bc97ad1712edee5"} Nov 23 08:45:01 crc kubenswrapper[4988]: I1123 08:45:01.148458 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" event={"ID":"22ae8f69-6dc3-46d3-8a02-b04053381a7d","Type":"ContainerStarted","Data":"4bc891a33981f1689098021faf5e683fd6628271722b75972f8f9aa3b9702369"} Nov 23 08:45:01 crc kubenswrapper[4988]: I1123 08:45:01.173033 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" podStartSLOduration=1.173016259 podStartE2EDuration="1.173016259s" podCreationTimestamp="2025-11-23 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:01.164061779 +0000 UTC m=+7153.472574562" watchObservedRunningTime="2025-11-23 08:45:01.173016259 +0000 UTC m=+7153.481529022" Nov 23 08:45:02 crc kubenswrapper[4988]: I1123 08:45:02.161836 4988 generic.go:334] "Generic (PLEG): container finished" podID="22ae8f69-6dc3-46d3-8a02-b04053381a7d" 
containerID="82ff6a8af624a8dc96afa67105deda9ca7bea70b7be9c8d71bc97ad1712edee5" exitCode=0 Nov 23 08:45:02 crc kubenswrapper[4988]: I1123 08:45:02.162038 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" event={"ID":"22ae8f69-6dc3-46d3-8a02-b04053381a7d","Type":"ContainerDied","Data":"82ff6a8af624a8dc96afa67105deda9ca7bea70b7be9c8d71bc97ad1712edee5"} Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.526645 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.647649 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t784s\" (UniqueName: \"kubernetes.io/projected/22ae8f69-6dc3-46d3-8a02-b04053381a7d-kube-api-access-t784s\") pod \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.647799 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22ae8f69-6dc3-46d3-8a02-b04053381a7d-config-volume\") pod \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.647987 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22ae8f69-6dc3-46d3-8a02-b04053381a7d-secret-volume\") pod \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\" (UID: \"22ae8f69-6dc3-46d3-8a02-b04053381a7d\") " Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.648652 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ae8f69-6dc3-46d3-8a02-b04053381a7d-config-volume" (OuterVolumeSpecName: "config-volume") pod "22ae8f69-6dc3-46d3-8a02-b04053381a7d" (UID: "22ae8f69-6dc3-46d3-8a02-b04053381a7d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.653915 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ae8f69-6dc3-46d3-8a02-b04053381a7d-kube-api-access-t784s" (OuterVolumeSpecName: "kube-api-access-t784s") pod "22ae8f69-6dc3-46d3-8a02-b04053381a7d" (UID: "22ae8f69-6dc3-46d3-8a02-b04053381a7d"). InnerVolumeSpecName "kube-api-access-t784s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.658383 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ae8f69-6dc3-46d3-8a02-b04053381a7d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "22ae8f69-6dc3-46d3-8a02-b04053381a7d" (UID: "22ae8f69-6dc3-46d3-8a02-b04053381a7d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.751333 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t784s\" (UniqueName: \"kubernetes.io/projected/22ae8f69-6dc3-46d3-8a02-b04053381a7d-kube-api-access-t784s\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.751376 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22ae8f69-6dc3-46d3-8a02-b04053381a7d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:03 crc kubenswrapper[4988]: I1123 08:45:03.751403 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22ae8f69-6dc3-46d3-8a02-b04053381a7d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:04 crc kubenswrapper[4988]: I1123 08:45:04.184881 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" event={"ID":"22ae8f69-6dc3-46d3-8a02-b04053381a7d","Type":"ContainerDied","Data":"4bc891a33981f1689098021faf5e683fd6628271722b75972f8f9aa3b9702369"} Nov 23 08:45:04 crc kubenswrapper[4988]: I1123 08:45:04.184926 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bc891a33981f1689098021faf5e683fd6628271722b75972f8f9aa3b9702369" Nov 23 08:45:04 crc kubenswrapper[4988]: I1123 08:45:04.184955 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx" Nov 23 08:45:04 crc kubenswrapper[4988]: I1123 08:45:04.252949 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr"] Nov 23 08:45:04 crc kubenswrapper[4988]: I1123 08:45:04.263146 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-9frzr"] Nov 23 08:45:04 crc kubenswrapper[4988]: E1123 08:45:04.426995 4988 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22ae8f69_6dc3_46d3_8a02_b04053381a7d.slice\": RecentStats: unable to find data in memory cache]" Nov 23 08:45:04 crc kubenswrapper[4988]: I1123 08:45:04.507061 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="909f8f22-cb33-4129-a454-9b609e93c248" path="/var/lib/kubelet/pods/909f8f22-cb33-4129-a454-9b609e93c248/volumes" Nov 23 08:45:55 crc kubenswrapper[4988]: I1123 08:45:55.807138 4988 generic.go:334] "Generic (PLEG): container finished" podID="4ad1eff7-64f5-4009-8142-48fd2984fa39" containerID="3ab04b64715422f4cf70db26a4f15ba8ea76cb923b166fe9e640f495259634ec" exitCode=0 Nov 23 08:45:55 crc kubenswrapper[4988]: I1123 08:45:55.807641 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" event={"ID":"4ad1eff7-64f5-4009-8142-48fd2984fa39","Type":"ContainerDied","Data":"3ab04b64715422f4cf70db26a4f15ba8ea76cb923b166fe9e640f495259634ec"} Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.244157 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.423072 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-ssh-key\") pod \"4ad1eff7-64f5-4009-8142-48fd2984fa39\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.423232 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-inventory\") pod \"4ad1eff7-64f5-4009-8142-48fd2984fa39\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.423418 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrtrw\" (UniqueName: \"kubernetes.io/projected/4ad1eff7-64f5-4009-8142-48fd2984fa39-kube-api-access-qrtrw\") pod \"4ad1eff7-64f5-4009-8142-48fd2984fa39\" (UID: \"4ad1eff7-64f5-4009-8142-48fd2984fa39\") " Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.433026 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad1eff7-64f5-4009-8142-48fd2984fa39-kube-api-access-qrtrw" (OuterVolumeSpecName: "kube-api-access-qrtrw") pod "4ad1eff7-64f5-4009-8142-48fd2984fa39" (UID: "4ad1eff7-64f5-4009-8142-48fd2984fa39"). InnerVolumeSpecName "kube-api-access-qrtrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.457741 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-inventory" (OuterVolumeSpecName: "inventory") pod "4ad1eff7-64f5-4009-8142-48fd2984fa39" (UID: "4ad1eff7-64f5-4009-8142-48fd2984fa39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.476480 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4ad1eff7-64f5-4009-8142-48fd2984fa39" (UID: "4ad1eff7-64f5-4009-8142-48fd2984fa39"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.526352 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrtrw\" (UniqueName: \"kubernetes.io/projected/4ad1eff7-64f5-4009-8142-48fd2984fa39-kube-api-access-qrtrw\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.526410 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.526433 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4ad1eff7-64f5-4009-8142-48fd2984fa39-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.879997 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" event={"ID":"4ad1eff7-64f5-4009-8142-48fd2984fa39","Type":"ContainerDied","Data":"0787b754481171f866102dfc76c2e17c310e4ba1e89841f8b163c3bf30820dbc"} Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.880374 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0787b754481171f866102dfc76c2e17c310e4ba1e89841f8b163c3bf30820dbc" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.880454 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-z8lv6" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.928631 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wxq47"] Nov 23 08:45:57 crc kubenswrapper[4988]: E1123 08:45:57.929028 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad1eff7-64f5-4009-8142-48fd2984fa39" containerName="configure-network-openstack-openstack-cell1" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.929040 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad1eff7-64f5-4009-8142-48fd2984fa39" containerName="configure-network-openstack-openstack-cell1" Nov 23 08:45:57 crc kubenswrapper[4988]: E1123 08:45:57.929070 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ae8f69-6dc3-46d3-8a02-b04053381a7d" containerName="collect-profiles" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.929075 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ae8f69-6dc3-46d3-8a02-b04053381a7d" containerName="collect-profiles" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.929293 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ae8f69-6dc3-46d3-8a02-b04053381a7d" containerName="collect-profiles" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.929309 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad1eff7-64f5-4009-8142-48fd2984fa39" containerName="configure-network-openstack-openstack-cell1" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.929968 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.932052 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.933572 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.933769 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.935509 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:45:57 crc kubenswrapper[4988]: I1123 08:45:57.943859 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wxq47"] Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.039596 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-inventory\") pod \"validate-network-openstack-openstack-cell1-wxq47\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.039678 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnr54\" (UniqueName: \"kubernetes.io/projected/de31961d-df81-4271-9536-e427f3b15766-kube-api-access-vnr54\") pod \"validate-network-openstack-openstack-cell1-wxq47\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.039830 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-ssh-key\") pod \"validate-network-openstack-openstack-cell1-wxq47\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.141408 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnr54\" (UniqueName: \"kubernetes.io/projected/de31961d-df81-4271-9536-e427f3b15766-kube-api-access-vnr54\") pod \"validate-network-openstack-openstack-cell1-wxq47\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.141483 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-ssh-key\") pod \"validate-network-openstack-openstack-cell1-wxq47\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.141661 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-inventory\") pod \"validate-network-openstack-openstack-cell1-wxq47\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " 
pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.147183 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-inventory\") pod \"validate-network-openstack-openstack-cell1-wxq47\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.147842 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-ssh-key\") pod \"validate-network-openstack-openstack-cell1-wxq47\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.159992 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnr54\" (UniqueName: \"kubernetes.io/projected/de31961d-df81-4271-9536-e427f3b15766-kube-api-access-vnr54\") pod \"validate-network-openstack-openstack-cell1-wxq47\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.255730 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:45:58 crc kubenswrapper[4988]: I1123 08:45:58.875938 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-wxq47"] Nov 23 08:45:58 crc kubenswrapper[4988]: W1123 08:45:58.885072 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde31961d_df81_4271_9536_e427f3b15766.slice/crio-957aa1d044c1210be30f62589f5844bb878999d238b780ea029dc33c88c22767 WatchSource:0}: Error finding container 957aa1d044c1210be30f62589f5844bb878999d238b780ea029dc33c88c22767: Status 404 returned error can't find the container with id 957aa1d044c1210be30f62589f5844bb878999d238b780ea029dc33c88c22767 Nov 23 08:45:59 crc kubenswrapper[4988]: I1123 08:45:59.903427 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wxq47" event={"ID":"de31961d-df81-4271-9536-e427f3b15766","Type":"ContainerStarted","Data":"d6cf015af692f74108b1b0fe9836da8b3ea60a705e8378a8a27d474e5b61e0f3"} Nov 23 08:45:59 crc kubenswrapper[4988]: I1123 08:45:59.904183 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wxq47" event={"ID":"de31961d-df81-4271-9536-e427f3b15766","Type":"ContainerStarted","Data":"957aa1d044c1210be30f62589f5844bb878999d238b780ea029dc33c88c22767"} Nov 23 08:45:59 crc kubenswrapper[4988]: I1123 08:45:59.934511 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-wxq47" podStartSLOduration=2.5105298190000003 podStartE2EDuration="2.934474797s" podCreationTimestamp="2025-11-23 08:45:57 +0000 UTC" firstStartedPulling="2025-11-23 08:45:58.887856373 +0000 UTC m=+7211.196369136" lastFinishedPulling="2025-11-23 08:45:59.311801341 +0000 UTC m=+7211.620314114" observedRunningTime="2025-11-23 08:45:59.922176954 +0000 UTC m=+7212.230689787" watchObservedRunningTime="2025-11-23 08:45:59.934474797 +0000 UTC 
m=+7212.242987610" Nov 23 08:46:03 crc kubenswrapper[4988]: I1123 08:46:03.459160 4988 scope.go:117] "RemoveContainer" containerID="73d45c0aeedc1159ee98e99105235e399fe8b7dba7b80ef73ee46ef91f998d65" Nov 23 08:46:04 crc kubenswrapper[4988]: I1123 08:46:04.963047 4988 generic.go:334] "Generic (PLEG): container finished" podID="de31961d-df81-4271-9536-e427f3b15766" containerID="d6cf015af692f74108b1b0fe9836da8b3ea60a705e8378a8a27d474e5b61e0f3" exitCode=0 Nov 23 08:46:04 crc kubenswrapper[4988]: I1123 08:46:04.963145 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wxq47" event={"ID":"de31961d-df81-4271-9536-e427f3b15766","Type":"ContainerDied","Data":"d6cf015af692f74108b1b0fe9836da8b3ea60a705e8378a8a27d474e5b61e0f3"} Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.406507 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.537017 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnr54\" (UniqueName: \"kubernetes.io/projected/de31961d-df81-4271-9536-e427f3b15766-kube-api-access-vnr54\") pod \"de31961d-df81-4271-9536-e427f3b15766\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.537360 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-ssh-key\") pod \"de31961d-df81-4271-9536-e427f3b15766\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.537417 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-inventory\") pod \"de31961d-df81-4271-9536-e427f3b15766\" (UID: \"de31961d-df81-4271-9536-e427f3b15766\") " Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.546431 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de31961d-df81-4271-9536-e427f3b15766-kube-api-access-vnr54" (OuterVolumeSpecName: "kube-api-access-vnr54") pod "de31961d-df81-4271-9536-e427f3b15766" (UID: "de31961d-df81-4271-9536-e427f3b15766"). InnerVolumeSpecName "kube-api-access-vnr54". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.576628 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "de31961d-df81-4271-9536-e427f3b15766" (UID: "de31961d-df81-4271-9536-e427f3b15766"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.612255 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-inventory" (OuterVolumeSpecName: "inventory") pod "de31961d-df81-4271-9536-e427f3b15766" (UID: "de31961d-df81-4271-9536-e427f3b15766"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.641477 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnr54\" (UniqueName: \"kubernetes.io/projected/de31961d-df81-4271-9536-e427f3b15766-kube-api-access-vnr54\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.641718 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.641890 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de31961d-df81-4271-9536-e427f3b15766-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.989347 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-wxq47" event={"ID":"de31961d-df81-4271-9536-e427f3b15766","Type":"ContainerDied","Data":"957aa1d044c1210be30f62589f5844bb878999d238b780ea029dc33c88c22767"} Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.989427 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="957aa1d044c1210be30f62589f5844bb878999d238b780ea029dc33c88c22767" Nov 23 08:46:06 crc kubenswrapper[4988]: I1123 08:46:06.989514 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-wxq47" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.173216 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-5f9wv"] Nov 23 08:46:07 crc kubenswrapper[4988]: E1123 08:46:07.173726 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de31961d-df81-4271-9536-e427f3b15766" containerName="validate-network-openstack-openstack-cell1" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.173747 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="de31961d-df81-4271-9536-e427f3b15766" containerName="validate-network-openstack-openstack-cell1" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.174038 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="de31961d-df81-4271-9536-e427f3b15766" containerName="validate-network-openstack-openstack-cell1" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.174829 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.178170 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.179692 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.185623 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-5f9wv"] Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.185753 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.186836 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.254304 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-ssh-key\") pod \"install-os-openstack-openstack-cell1-5f9wv\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.254932 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-inventory\") pod \"install-os-openstack-openstack-cell1-5f9wv\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.255377 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq98s\" (UniqueName: \"kubernetes.io/projected/f89ffe71-9a0e-46e6-b982-02107da4ea39-kube-api-access-fq98s\") pod \"install-os-openstack-openstack-cell1-5f9wv\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.357826 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq98s\" (UniqueName: \"kubernetes.io/projected/f89ffe71-9a0e-46e6-b982-02107da4ea39-kube-api-access-fq98s\") pod \"install-os-openstack-openstack-cell1-5f9wv\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.358109 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-ssh-key\") pod \"install-os-openstack-openstack-cell1-5f9wv\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.358221 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-inventory\") pod \"install-os-openstack-openstack-cell1-5f9wv\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.362637 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-inventory\") pod \"install-os-openstack-openstack-cell1-5f9wv\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.364043 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-ssh-key\") pod \"install-os-openstack-openstack-cell1-5f9wv\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.379151 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq98s\" (UniqueName: \"kubernetes.io/projected/f89ffe71-9a0e-46e6-b982-02107da4ea39-kube-api-access-fq98s\") pod \"install-os-openstack-openstack-cell1-5f9wv\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:07 crc kubenswrapper[4988]: I1123 08:46:07.508306 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:08 crc kubenswrapper[4988]: I1123 08:46:08.094103 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-5f9wv"] Nov 23 08:46:09 crc kubenswrapper[4988]: I1123 08:46:09.012991 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-5f9wv" event={"ID":"f89ffe71-9a0e-46e6-b982-02107da4ea39","Type":"ContainerStarted","Data":"232012a1ffd08cd0b74616455ccc3e74cc1c3f4e9f5a978392d9db0fb6e2ad55"} Nov 23 08:46:09 crc kubenswrapper[4988]: I1123 08:46:09.013627 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-5f9wv" event={"ID":"f89ffe71-9a0e-46e6-b982-02107da4ea39","Type":"ContainerStarted","Data":"1173d721311673ceb4a27d4f17701a0253e2708c37529b9d3eea3108b1b16a44"} Nov 23 08:46:09 crc kubenswrapper[4988]: I1123 08:46:09.043160 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-5f9wv" podStartSLOduration=1.6252215209999998 podStartE2EDuration="2.043134891s" podCreationTimestamp="2025-11-23 08:46:07 +0000 UTC" firstStartedPulling="2025-11-23 08:46:08.10439241 +0000 UTC m=+7220.412905173" lastFinishedPulling="2025-11-23 08:46:08.52230578 +0000 UTC m=+7220.830818543" observedRunningTime="2025-11-23 08:46:09.038008314 +0000 UTC m=+7221.346521117" watchObservedRunningTime="2025-11-23 08:46:09.043134891 +0000 UTC m=+7221.351647664" Nov 23 08:46:21 crc kubenswrapper[4988]: I1123 08:46:21.672825 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:46:21 crc kubenswrapper[4988]: I1123 08:46:21.673346 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 23 08:46:40 crc kubenswrapper[4988]: I1123 08:46:40.821588 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kqwmm"] Nov 23 08:46:40 crc kubenswrapper[4988]: I1123 08:46:40.824678 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:40 crc kubenswrapper[4988]: I1123 08:46:40.835923 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqwmm"] Nov 23 08:46:40 crc kubenswrapper[4988]: I1123 08:46:40.981862 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-catalog-content\") pod \"redhat-operators-kqwmm\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:40 crc kubenswrapper[4988]: I1123 08:46:40.981974 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrbbj\" (UniqueName: \"kubernetes.io/projected/0fbb8739-1773-4af7-aae9-3bdee17a96ce-kube-api-access-hrbbj\") pod \"redhat-operators-kqwmm\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:40 crc kubenswrapper[4988]: I1123 08:46:40.982039 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-utilities\") pod \"redhat-operators-kqwmm\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:41 crc kubenswrapper[4988]: I1123 08:46:41.083806 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrbbj\" (UniqueName: \"kubernetes.io/projected/0fbb8739-1773-4af7-aae9-3bdee17a96ce-kube-api-access-hrbbj\") pod \"redhat-operators-kqwmm\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:41 crc kubenswrapper[4988]: I1123 08:46:41.083903 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-utilities\") pod \"redhat-operators-kqwmm\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:41 crc kubenswrapper[4988]: I1123 08:46:41.084072 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-catalog-content\") pod \"redhat-operators-kqwmm\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:41 crc kubenswrapper[4988]: I1123 08:46:41.084432 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-utilities\") pod \"redhat-operators-kqwmm\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:41 crc kubenswrapper[4988]: I1123 08:46:41.084477 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-catalog-content\") pod \"redhat-operators-kqwmm\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:41 crc kubenswrapper[4988]: I1123 08:46:41.115296 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrbbj\" (UniqueName: \"kubernetes.io/projected/0fbb8739-1773-4af7-aae9-3bdee17a96ce-kube-api-access-hrbbj\") pod \"redhat-operators-kqwmm\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:41 crc kubenswrapper[4988]: I1123 08:46:41.158160 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:41 crc kubenswrapper[4988]: I1123 08:46:41.713473 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kqwmm"] Nov 23 08:46:42 crc kubenswrapper[4988]: I1123 08:46:42.383591 4988 generic.go:334] "Generic (PLEG): container finished" podID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerID="7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c" exitCode=0 Nov 23 08:46:42 crc kubenswrapper[4988]: I1123 08:46:42.383827 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqwmm" event={"ID":"0fbb8739-1773-4af7-aae9-3bdee17a96ce","Type":"ContainerDied","Data":"7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c"} Nov 23 08:46:42 crc kubenswrapper[4988]: I1123 08:46:42.383851 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqwmm" event={"ID":"0fbb8739-1773-4af7-aae9-3bdee17a96ce","Type":"ContainerStarted","Data":"2ba089ca3ec446eac1a330e63217f2be325d905a3cf4ee9cca5106d4d6f8a9cb"} Nov 23 08:46:44 crc kubenswrapper[4988]: I1123 08:46:44.423501 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqwmm" event={"ID":"0fbb8739-1773-4af7-aae9-3bdee17a96ce","Type":"ContainerStarted","Data":"d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54"} Nov 23 08:46:47 crc kubenswrapper[4988]: I1123 08:46:47.455576 4988 generic.go:334] "Generic (PLEG): container finished" podID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerID="d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54" exitCode=0 Nov 23 08:46:47 crc kubenswrapper[4988]: I1123 08:46:47.455681 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqwmm" event={"ID":"0fbb8739-1773-4af7-aae9-3bdee17a96ce","Type":"ContainerDied","Data":"d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54"} Nov 23 08:46:48 crc kubenswrapper[4988]: I1123 08:46:48.469216 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqwmm" event={"ID":"0fbb8739-1773-4af7-aae9-3bdee17a96ce","Type":"ContainerStarted","Data":"39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f"} Nov 23 08:46:48 crc kubenswrapper[4988]: I1123 08:46:48.515504 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kqwmm" podStartSLOduration=3.016514648 podStartE2EDuration="8.515485555s" podCreationTimestamp="2025-11-23 08:46:40 +0000 UTC" firstStartedPulling="2025-11-23 08:46:42.385630801 +0000 UTC m=+7254.694143564" lastFinishedPulling="2025-11-23 08:46:47.884601708 +0000 UTC m=+7260.193114471" 
observedRunningTime="2025-11-23 08:46:48.490281995 +0000 UTC m=+7260.798794768" watchObservedRunningTime="2025-11-23 08:46:48.515485555 +0000 UTC m=+7260.823998328" Nov 23 08:46:51 crc kubenswrapper[4988]: I1123 08:46:51.158812 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:51 crc kubenswrapper[4988]: I1123 08:46:51.159990 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:46:51 crc kubenswrapper[4988]: I1123 08:46:51.672071 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:46:51 crc kubenswrapper[4988]: I1123 08:46:51.672506 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:46:52 crc kubenswrapper[4988]: I1123 08:46:52.233293 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kqwmm" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerName="registry-server" probeResult="failure" output=< Nov 23 08:46:52 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:46:52 crc kubenswrapper[4988]: > Nov 23 08:46:56 crc kubenswrapper[4988]: I1123 08:46:56.557420 4988 generic.go:334] "Generic (PLEG): container finished" podID="f89ffe71-9a0e-46e6-b982-02107da4ea39" containerID="232012a1ffd08cd0b74616455ccc3e74cc1c3f4e9f5a978392d9db0fb6e2ad55" exitCode=0 Nov 23 08:46:56 crc kubenswrapper[4988]: I1123 08:46:56.557470 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-5f9wv" event={"ID":"f89ffe71-9a0e-46e6-b982-02107da4ea39","Type":"ContainerDied","Data":"232012a1ffd08cd0b74616455ccc3e74cc1c3f4e9f5a978392d9db0fb6e2ad55"} Nov 23 08:46:57 crc kubenswrapper[4988]: I1123 08:46:57.996662 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.068167 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq98s\" (UniqueName: \"kubernetes.io/projected/f89ffe71-9a0e-46e6-b982-02107da4ea39-kube-api-access-fq98s\") pod \"f89ffe71-9a0e-46e6-b982-02107da4ea39\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.068236 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-ssh-key\") pod \"f89ffe71-9a0e-46e6-b982-02107da4ea39\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.068266 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-inventory\") pod \"f89ffe71-9a0e-46e6-b982-02107da4ea39\" (UID: \"f89ffe71-9a0e-46e6-b982-02107da4ea39\") " Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.074268 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89ffe71-9a0e-46e6-b982-02107da4ea39-kube-api-access-fq98s" (OuterVolumeSpecName: "kube-api-access-fq98s") pod "f89ffe71-9a0e-46e6-b982-02107da4ea39" (UID: "f89ffe71-9a0e-46e6-b982-02107da4ea39"). InnerVolumeSpecName "kube-api-access-fq98s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.097824 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f89ffe71-9a0e-46e6-b982-02107da4ea39" (UID: "f89ffe71-9a0e-46e6-b982-02107da4ea39"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.098164 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-inventory" (OuterVolumeSpecName: "inventory") pod "f89ffe71-9a0e-46e6-b982-02107da4ea39" (UID: "f89ffe71-9a0e-46e6-b982-02107da4ea39"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.169578 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq98s\" (UniqueName: \"kubernetes.io/projected/f89ffe71-9a0e-46e6-b982-02107da4ea39-kube-api-access-fq98s\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.169606 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.169647 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f89ffe71-9a0e-46e6-b982-02107da4ea39-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.583278 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-5f9wv" event={"ID":"f89ffe71-9a0e-46e6-b982-02107da4ea39","Type":"ContainerDied","Data":"1173d721311673ceb4a27d4f17701a0253e2708c37529b9d3eea3108b1b16a44"} Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.583321 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1173d721311673ceb4a27d4f17701a0253e2708c37529b9d3eea3108b1b16a44" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.583355 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-5f9wv" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.765324 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-tcnzd"] Nov 23 08:46:58 crc kubenswrapper[4988]: E1123 08:46:58.766088 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89ffe71-9a0e-46e6-b982-02107da4ea39" containerName="install-os-openstack-openstack-cell1" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.766100 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89ffe71-9a0e-46e6-b982-02107da4ea39" containerName="install-os-openstack-openstack-cell1" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.766322 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89ffe71-9a0e-46e6-b982-02107da4ea39" containerName="install-os-openstack-openstack-cell1" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.767061 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.770489 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.770530 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.770598 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.771239 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.780250 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-tcnzd"] Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.781301 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-ssh-key\") pod \"configure-os-openstack-openstack-cell1-tcnzd\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.781351 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s6nk\" (UniqueName: \"kubernetes.io/projected/cc698bbf-b5d4-48da-8527-8524710072c3-kube-api-access-8s6nk\") pod \"configure-os-openstack-openstack-cell1-tcnzd\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.781481 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-inventory\") pod \"configure-os-openstack-openstack-cell1-tcnzd\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.883730 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s6nk\" (UniqueName: \"kubernetes.io/projected/cc698bbf-b5d4-48da-8527-8524710072c3-kube-api-access-8s6nk\") pod \"configure-os-openstack-openstack-cell1-tcnzd\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.883886 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-inventory\") pod \"configure-os-openstack-openstack-cell1-tcnzd\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.883977 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-ssh-key\") pod \"configure-os-openstack-openstack-cell1-tcnzd\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:58 crc kubenswrapper[4988]: 
I1123 08:46:58.889691 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-inventory\") pod \"configure-os-openstack-openstack-cell1-tcnzd\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.890765 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-ssh-key\") pod \"configure-os-openstack-openstack-cell1-tcnzd\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:58 crc kubenswrapper[4988]: I1123 08:46:58.902979 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s6nk\" (UniqueName: \"kubernetes.io/projected/cc698bbf-b5d4-48da-8527-8524710072c3-kube-api-access-8s6nk\") pod \"configure-os-openstack-openstack-cell1-tcnzd\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:59 crc kubenswrapper[4988]: I1123 08:46:59.093752 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:46:59 crc kubenswrapper[4988]: I1123 08:46:59.700473 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-tcnzd"] Nov 23 08:47:00 crc kubenswrapper[4988]: I1123 08:47:00.615501 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" event={"ID":"cc698bbf-b5d4-48da-8527-8524710072c3","Type":"ContainerStarted","Data":"d84737fa51f141b81e3c919fc76ce311f25052c90a366cae9240f1bb23c910d7"} Nov 23 08:47:00 crc kubenswrapper[4988]: I1123 08:47:00.616486 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" event={"ID":"cc698bbf-b5d4-48da-8527-8524710072c3","Type":"ContainerStarted","Data":"0e9d31cbf4f7797635c987478e6213fb0ed2b1e32106dfd70983e5037262cefc"} Nov 23 08:47:01 crc kubenswrapper[4988]: I1123 08:47:01.231422 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:47:01 crc kubenswrapper[4988]: I1123 08:47:01.271122 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" podStartSLOduration=2.782659479 podStartE2EDuration="3.271094603s" podCreationTimestamp="2025-11-23 08:46:58 +0000 UTC" firstStartedPulling="2025-11-23 08:46:59.708882707 +0000 UTC m=+7272.017395480" lastFinishedPulling="2025-11-23 08:47:00.197317801 +0000 UTC m=+7272.505830604" observedRunningTime="2025-11-23 08:47:00.640578374 +0000 UTC m=+7272.949091177" watchObservedRunningTime="2025-11-23 08:47:01.271094603 +0000 UTC m=+7273.579607406" Nov 23 08:47:01 crc kubenswrapper[4988]: I1123 08:47:01.306364 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:47:01 crc kubenswrapper[4988]: I1123 08:47:01.472232 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqwmm"] Nov 23 08:47:02 crc kubenswrapper[4988]: I1123 08:47:02.651459 4988 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-kqwmm" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerName="registry-server" containerID="cri-o://39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f" gracePeriod=2 Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.203942 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.281576 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-catalog-content\") pod \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.281641 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrbbj\" (UniqueName: \"kubernetes.io/projected/0fbb8739-1773-4af7-aae9-3bdee17a96ce-kube-api-access-hrbbj\") pod \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.281785 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-utilities\") pod \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\" (UID: \"0fbb8739-1773-4af7-aae9-3bdee17a96ce\") " Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.283069 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-utilities" (OuterVolumeSpecName: "utilities") pod "0fbb8739-1773-4af7-aae9-3bdee17a96ce" (UID: "0fbb8739-1773-4af7-aae9-3bdee17a96ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.288069 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbb8739-1773-4af7-aae9-3bdee17a96ce-kube-api-access-hrbbj" (OuterVolumeSpecName: "kube-api-access-hrbbj") pod "0fbb8739-1773-4af7-aae9-3bdee17a96ce" (UID: "0fbb8739-1773-4af7-aae9-3bdee17a96ce"). InnerVolumeSpecName "kube-api-access-hrbbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.383070 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fbb8739-1773-4af7-aae9-3bdee17a96ce" (UID: "0fbb8739-1773-4af7-aae9-3bdee17a96ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.384972 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.384998 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbb8739-1773-4af7-aae9-3bdee17a96ce-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.385011 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrbbj\" (UniqueName: \"kubernetes.io/projected/0fbb8739-1773-4af7-aae9-3bdee17a96ce-kube-api-access-hrbbj\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.667861 4988 generic.go:334] "Generic (PLEG): container finished" podID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerID="39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f" exitCode=0 Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.667946 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kqwmm" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.669225 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqwmm" event={"ID":"0fbb8739-1773-4af7-aae9-3bdee17a96ce","Type":"ContainerDied","Data":"39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f"} Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.669461 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kqwmm" event={"ID":"0fbb8739-1773-4af7-aae9-3bdee17a96ce","Type":"ContainerDied","Data":"2ba089ca3ec446eac1a330e63217f2be325d905a3cf4ee9cca5106d4d6f8a9cb"} Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.669553 4988 scope.go:117] "RemoveContainer" containerID="39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.707269 4988 scope.go:117] "RemoveContainer" containerID="d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.721529 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kqwmm"] Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.732880 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kqwmm"] Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.761719 4988 scope.go:117] "RemoveContainer" containerID="7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.832305 4988 scope.go:117] "RemoveContainer" containerID="39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f" Nov 23 08:47:03 crc kubenswrapper[4988]: E1123 08:47:03.832724 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f\": container with ID starting with 39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f not found: ID does not exist" containerID="39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.832780 4988 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f"} err="failed to get container status \"39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f\": rpc error: code = NotFound desc = could not find container \"39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f\": container with ID starting with 39166f62a524557cc7e4cb0566a33185fbf899edad668c7dcdd1809b366af72f not found: ID does not exist" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.832812 4988 scope.go:117] "RemoveContainer" containerID="d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54" Nov 23 08:47:03 crc kubenswrapper[4988]: E1123 08:47:03.833183 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54\": container with ID starting with d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54 not found: ID does not exist" containerID="d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.833220 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54"} err="failed to get container status \"d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54\": rpc error: code = NotFound desc = could not find container \"d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54\": container with ID starting with d09f7be95c54927d6a16e2930530e0d317f98e63fa152e1904894c5695b23a54 not found: ID does not exist" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.833258 4988 scope.go:117] "RemoveContainer" containerID="7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c" Nov 23 08:47:03 crc kubenswrapper[4988]: E1123 08:47:03.837602 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c\": container with ID starting with 7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c not found: ID does not exist" containerID="7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c" Nov 23 08:47:03 crc kubenswrapper[4988]: I1123 08:47:03.837634 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c"} err="failed to get container status \"7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c\": rpc error: code = NotFound desc = could not find container \"7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c\": container with ID starting with 7f30440ad1a8116c77444c68929722d029a5e473e2f1c8a80f511d40b70db88c not found: ID does not exist" Nov 23 08:47:04 crc kubenswrapper[4988]: I1123 08:47:04.527234 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" path="/var/lib/kubelet/pods/0fbb8739-1773-4af7-aae9-3bdee17a96ce/volumes" Nov 23 08:47:21 crc kubenswrapper[4988]: I1123 08:47:21.672279 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:47:21 crc kubenswrapper[4988]: I1123 08:47:21.672892 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:47:21 crc kubenswrapper[4988]: I1123 08:47:21.672956 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 08:47:21 crc kubenswrapper[4988]: I1123 08:47:21.674041 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:47:21 crc kubenswrapper[4988]: I1123 08:47:21.674152 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" gracePeriod=600 Nov 23 08:47:21 crc kubenswrapper[4988]: E1123 08:47:21.807083 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:47:21 crc kubenswrapper[4988]: I1123 08:47:21.859366 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" exitCode=0 Nov 23 08:47:21 crc kubenswrapper[4988]: I1123 08:47:21.859406 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"} Nov 23 08:47:21 crc kubenswrapper[4988]: I1123 08:47:21.859464 4988 scope.go:117] "RemoveContainer" containerID="3496f7af6b8e40667ee60a91931d14cc8f818d2baad108df4e0225f6f97f195f" Nov 23 08:47:21 crc kubenswrapper[4988]: I1123 08:47:21.860358 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:47:21 crc kubenswrapper[4988]: E1123 08:47:21.860863 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:47:33 crc kubenswrapper[4988]: I1123 08:47:33.496916 4988 scope.go:117] "RemoveContainer" 
containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:47:33 crc kubenswrapper[4988]: E1123 08:47:33.498148 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:47:44 crc kubenswrapper[4988]: I1123 08:47:44.496883 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:47:44 crc kubenswrapper[4988]: E1123 08:47:44.497897 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:47:45 crc kubenswrapper[4988]: I1123 08:47:45.125433 4988 generic.go:334] "Generic (PLEG): container finished" podID="cc698bbf-b5d4-48da-8527-8524710072c3" containerID="d84737fa51f141b81e3c919fc76ce311f25052c90a366cae9240f1bb23c910d7" exitCode=0 Nov 23 08:47:45 crc kubenswrapper[4988]: I1123 08:47:45.125474 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" event={"ID":"cc698bbf-b5d4-48da-8527-8524710072c3","Type":"ContainerDied","Data":"d84737fa51f141b81e3c919fc76ce311f25052c90a366cae9240f1bb23c910d7"} Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.601593 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.622939 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s6nk\" (UniqueName: \"kubernetes.io/projected/cc698bbf-b5d4-48da-8527-8524710072c3-kube-api-access-8s6nk\") pod \"cc698bbf-b5d4-48da-8527-8524710072c3\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.623046 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-inventory\") pod \"cc698bbf-b5d4-48da-8527-8524710072c3\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.623127 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-ssh-key\") pod \"cc698bbf-b5d4-48da-8527-8524710072c3\" (UID: \"cc698bbf-b5d4-48da-8527-8524710072c3\") " Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.628787 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc698bbf-b5d4-48da-8527-8524710072c3-kube-api-access-8s6nk" (OuterVolumeSpecName: "kube-api-access-8s6nk") pod "cc698bbf-b5d4-48da-8527-8524710072c3" (UID: "cc698bbf-b5d4-48da-8527-8524710072c3"). InnerVolumeSpecName "kube-api-access-8s6nk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.658885 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cc698bbf-b5d4-48da-8527-8524710072c3" (UID: "cc698bbf-b5d4-48da-8527-8524710072c3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.663798 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-inventory" (OuterVolumeSpecName: "inventory") pod "cc698bbf-b5d4-48da-8527-8524710072c3" (UID: "cc698bbf-b5d4-48da-8527-8524710072c3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.725395 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.725441 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s6nk\" (UniqueName: \"kubernetes.io/projected/cc698bbf-b5d4-48da-8527-8524710072c3-kube-api-access-8s6nk\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:46 crc kubenswrapper[4988]: I1123 08:47:46.725463 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc698bbf-b5d4-48da-8527-8524710072c3-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.148108 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" event={"ID":"cc698bbf-b5d4-48da-8527-8524710072c3","Type":"ContainerDied","Data":"0e9d31cbf4f7797635c987478e6213fb0ed2b1e32106dfd70983e5037262cefc"} Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.148148 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e9d31cbf4f7797635c987478e6213fb0ed2b1e32106dfd70983e5037262cefc" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.148226 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-tcnzd" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.263973 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-rfdt5"] Nov 23 08:47:47 crc kubenswrapper[4988]: E1123 08:47:47.264451 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerName="registry-server" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.264467 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerName="registry-server" Nov 23 08:47:47 crc kubenswrapper[4988]: E1123 08:47:47.264482 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc698bbf-b5d4-48da-8527-8524710072c3" containerName="configure-os-openstack-openstack-cell1" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.264489 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc698bbf-b5d4-48da-8527-8524710072c3" containerName="configure-os-openstack-openstack-cell1" Nov 23 08:47:47 crc kubenswrapper[4988]: E1123 08:47:47.264507 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerName="extract-content" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.264513 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerName="extract-content" Nov 23 08:47:47 crc kubenswrapper[4988]: E1123 08:47:47.264541 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerName="extract-utilities" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.264547 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerName="extract-utilities" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.264721 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbb8739-1773-4af7-aae9-3bdee17a96ce" containerName="registry-server" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.264756 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc698bbf-b5d4-48da-8527-8524710072c3" containerName="configure-os-openstack-openstack-cell1" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.265471 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.268011 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.268106 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.268186 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.278081 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.279370 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-rfdt5"] Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.441267 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mgkz\" (UniqueName: \"kubernetes.io/projected/46526462-8f88-47f0-a5ab-f4becf600f50-kube-api-access-9mgkz\") pod \"ssh-known-hosts-openstack-rfdt5\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.441428 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-inventory-0\") pod \"ssh-known-hosts-openstack-rfdt5\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.441505 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-rfdt5\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.544378 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mgkz\" (UniqueName: \"kubernetes.io/projected/46526462-8f88-47f0-a5ab-f4becf600f50-kube-api-access-9mgkz\") pod \"ssh-known-hosts-openstack-rfdt5\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.544494 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-inventory-0\") pod \"ssh-known-hosts-openstack-rfdt5\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.544554 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-rfdt5\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.550919 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" 
(UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-inventory-0\") pod \"ssh-known-hosts-openstack-rfdt5\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.558062 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-rfdt5\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.569468 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mgkz\" (UniqueName: \"kubernetes.io/projected/46526462-8f88-47f0-a5ab-f4becf600f50-kube-api-access-9mgkz\") pod \"ssh-known-hosts-openstack-rfdt5\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:47 crc kubenswrapper[4988]: I1123 08:47:47.592846 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:47:48 crc kubenswrapper[4988]: I1123 08:47:48.128870 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-rfdt5"] Nov 23 08:47:48 crc kubenswrapper[4988]: I1123 08:47:48.163697 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-rfdt5" event={"ID":"46526462-8f88-47f0-a5ab-f4becf600f50","Type":"ContainerStarted","Data":"184ea5043d5bc58aecf4d464f72bd67d55036d5a62f3732145385754883b0100"} Nov 23 08:47:48 crc kubenswrapper[4988]: I1123 08:47:48.619156 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:47:49 crc kubenswrapper[4988]: I1123 08:47:49.181967 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-rfdt5" event={"ID":"46526462-8f88-47f0-a5ab-f4becf600f50","Type":"ContainerStarted","Data":"df2b17755ac60c6fe30ab17f178c49932e53a3a6fe70b0ba1718b34d8e7aff57"} Nov 23 08:47:49 crc kubenswrapper[4988]: I1123 08:47:49.211981 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-rfdt5" podStartSLOduration=1.735391741 podStartE2EDuration="2.211963203s" podCreationTimestamp="2025-11-23 08:47:47 +0000 UTC" firstStartedPulling="2025-11-23 08:47:48.139938664 +0000 UTC m=+7320.448451437" lastFinishedPulling="2025-11-23 08:47:48.616510136 +0000 UTC m=+7320.925022899" observedRunningTime="2025-11-23 08:47:49.19802467 +0000 UTC m=+7321.506537513" watchObservedRunningTime="2025-11-23 08:47:49.211963203 +0000 UTC m=+7321.520475966" Nov 23 08:47:55 crc kubenswrapper[4988]: I1123 08:47:55.496900 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:47:55 crc kubenswrapper[4988]: E1123 08:47:55.497671 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:47:59 crc kubenswrapper[4988]: I1123 08:47:59.278643 
4988 generic.go:334] "Generic (PLEG): container finished" podID="46526462-8f88-47f0-a5ab-f4becf600f50" containerID="df2b17755ac60c6fe30ab17f178c49932e53a3a6fe70b0ba1718b34d8e7aff57" exitCode=0 Nov 23 08:47:59 crc kubenswrapper[4988]: I1123 08:47:59.279230 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-rfdt5" event={"ID":"46526462-8f88-47f0-a5ab-f4becf600f50","Type":"ContainerDied","Data":"df2b17755ac60c6fe30ab17f178c49932e53a3a6fe70b0ba1718b34d8e7aff57"} Nov 23 08:48:00 crc kubenswrapper[4988]: I1123 08:48:00.800456 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:48:00 crc kubenswrapper[4988]: I1123 08:48:00.940351 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-inventory-0\") pod \"46526462-8f88-47f0-a5ab-f4becf600f50\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " Nov 23 08:48:00 crc kubenswrapper[4988]: I1123 08:48:00.940402 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mgkz\" (UniqueName: \"kubernetes.io/projected/46526462-8f88-47f0-a5ab-f4becf600f50-kube-api-access-9mgkz\") pod \"46526462-8f88-47f0-a5ab-f4becf600f50\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " Nov 23 08:48:00 crc kubenswrapper[4988]: I1123 08:48:00.940506 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-ssh-key-openstack-cell1\") pod \"46526462-8f88-47f0-a5ab-f4becf600f50\" (UID: \"46526462-8f88-47f0-a5ab-f4becf600f50\") " Nov 23 08:48:00 crc kubenswrapper[4988]: I1123 08:48:00.950469 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46526462-8f88-47f0-a5ab-f4becf600f50-kube-api-access-9mgkz" (OuterVolumeSpecName: "kube-api-access-9mgkz") pod "46526462-8f88-47f0-a5ab-f4becf600f50" (UID: "46526462-8f88-47f0-a5ab-f4becf600f50"). InnerVolumeSpecName "kube-api-access-9mgkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:48:00 crc kubenswrapper[4988]: I1123 08:48:00.974530 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "46526462-8f88-47f0-a5ab-f4becf600f50" (UID: "46526462-8f88-47f0-a5ab-f4becf600f50"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:48:00 crc kubenswrapper[4988]: I1123 08:48:00.975591 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "46526462-8f88-47f0-a5ab-f4becf600f50" (UID: "46526462-8f88-47f0-a5ab-f4becf600f50"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.043542 4988 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.043597 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mgkz\" (UniqueName: \"kubernetes.io/projected/46526462-8f88-47f0-a5ab-f4becf600f50-kube-api-access-9mgkz\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.043618 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/46526462-8f88-47f0-a5ab-f4becf600f50-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.304749 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-rfdt5" event={"ID":"46526462-8f88-47f0-a5ab-f4becf600f50","Type":"ContainerDied","Data":"184ea5043d5bc58aecf4d464f72bd67d55036d5a62f3732145385754883b0100"} Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.304811 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="184ea5043d5bc58aecf4d464f72bd67d55036d5a62f3732145385754883b0100" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.305278 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-rfdt5" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.426484 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-vjbjb"] Nov 23 08:48:01 crc kubenswrapper[4988]: E1123 08:48:01.427694 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46526462-8f88-47f0-a5ab-f4becf600f50" containerName="ssh-known-hosts-openstack" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.427724 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="46526462-8f88-47f0-a5ab-f4becf600f50" containerName="ssh-known-hosts-openstack" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.428969 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="46526462-8f88-47f0-a5ab-f4becf600f50" containerName="ssh-known-hosts-openstack" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.430348 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.434970 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.436593 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.436897 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.437132 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.461832 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-vjbjb"] Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.569284 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-inventory\") pod \"run-os-openstack-openstack-cell1-vjbjb\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.569371 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-ssh-key\") pod \"run-os-openstack-openstack-cell1-vjbjb\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.569621 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x45h8\" (UniqueName: \"kubernetes.io/projected/1d4bbee0-8089-497c-9b50-d3dedd273cf3-kube-api-access-x45h8\") pod \"run-os-openstack-openstack-cell1-vjbjb\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.671836 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-inventory\") pod \"run-os-openstack-openstack-cell1-vjbjb\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.672008 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-ssh-key\") pod \"run-os-openstack-openstack-cell1-vjbjb\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.672136 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x45h8\" (UniqueName: \"kubernetes.io/projected/1d4bbee0-8089-497c-9b50-d3dedd273cf3-kube-api-access-x45h8\") pod \"run-os-openstack-openstack-cell1-vjbjb\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.684380 4988 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-ssh-key\") pod \"run-os-openstack-openstack-cell1-vjbjb\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.685706 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-inventory\") pod \"run-os-openstack-openstack-cell1-vjbjb\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.693295 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x45h8\" (UniqueName: \"kubernetes.io/projected/1d4bbee0-8089-497c-9b50-d3dedd273cf3-kube-api-access-x45h8\") pod \"run-os-openstack-openstack-cell1-vjbjb\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:01 crc kubenswrapper[4988]: I1123 08:48:01.772954 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:02 crc kubenswrapper[4988]: I1123 08:48:02.407644 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-vjbjb"] Nov 23 08:48:02 crc kubenswrapper[4988]: I1123 08:48:02.414655 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:48:03 crc kubenswrapper[4988]: I1123 08:48:03.336723 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-vjbjb" event={"ID":"1d4bbee0-8089-497c-9b50-d3dedd273cf3","Type":"ContainerStarted","Data":"07e37b4794e550ed31b8ca9c696b9ded218d3e40cdf75373be3fd629e2879123"} Nov 23 08:48:03 crc kubenswrapper[4988]: I1123 08:48:03.337074 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-vjbjb" event={"ID":"1d4bbee0-8089-497c-9b50-d3dedd273cf3","Type":"ContainerStarted","Data":"f601ab957ee669cfced377e292b28b6020921ea8f2c5ca78900f5b20655f6a69"} Nov 23 08:48:03 crc kubenswrapper[4988]: I1123 08:48:03.359998 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-vjbjb" podStartSLOduration=1.892860558 podStartE2EDuration="2.359970228s" podCreationTimestamp="2025-11-23 08:48:01 +0000 UTC" firstStartedPulling="2025-11-23 08:48:02.41440971 +0000 UTC m=+7334.722922463" lastFinishedPulling="2025-11-23 08:48:02.88151937 +0000 UTC m=+7335.190032133" observedRunningTime="2025-11-23 08:48:03.357121728 +0000 UTC m=+7335.665634551" watchObservedRunningTime="2025-11-23 08:48:03.359970228 +0000 UTC m=+7335.668483031" Nov 23 08:48:08 crc kubenswrapper[4988]: I1123 08:48:08.503087 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:48:08 crc kubenswrapper[4988]: E1123 08:48:08.503637 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 
23 08:48:11 crc kubenswrapper[4988]: I1123 08:48:11.470631 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d4bbee0-8089-497c-9b50-d3dedd273cf3" containerID="07e37b4794e550ed31b8ca9c696b9ded218d3e40cdf75373be3fd629e2879123" exitCode=0 Nov 23 08:48:11 crc kubenswrapper[4988]: I1123 08:48:11.470731 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-vjbjb" event={"ID":"1d4bbee0-8089-497c-9b50-d3dedd273cf3","Type":"ContainerDied","Data":"07e37b4794e550ed31b8ca9c696b9ded218d3e40cdf75373be3fd629e2879123"} Nov 23 08:48:12 crc kubenswrapper[4988]: I1123 08:48:12.955879 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-vjbjb" Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.143917 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-ssh-key\") pod \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.143966 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-inventory\") pod \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.144098 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x45h8\" (UniqueName: \"kubernetes.io/projected/1d4bbee0-8089-497c-9b50-d3dedd273cf3-kube-api-access-x45h8\") pod \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\" (UID: \"1d4bbee0-8089-497c-9b50-d3dedd273cf3\") " Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.148904 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4bbee0-8089-497c-9b50-d3dedd273cf3-kube-api-access-x45h8" (OuterVolumeSpecName: "kube-api-access-x45h8") pod "1d4bbee0-8089-497c-9b50-d3dedd273cf3" (UID: "1d4bbee0-8089-497c-9b50-d3dedd273cf3"). InnerVolumeSpecName "kube-api-access-x45h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.189430 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-inventory" (OuterVolumeSpecName: "inventory") pod "1d4bbee0-8089-497c-9b50-d3dedd273cf3" (UID: "1d4bbee0-8089-497c-9b50-d3dedd273cf3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.190331 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1d4bbee0-8089-497c-9b50-d3dedd273cf3" (UID: "1d4bbee0-8089-497c-9b50-d3dedd273cf3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.246913 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x45h8\" (UniqueName: \"kubernetes.io/projected/1d4bbee0-8089-497c-9b50-d3dedd273cf3-kube-api-access-x45h8\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.247100 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.247207 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d4bbee0-8089-497c-9b50-d3dedd273cf3-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.492848 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-vjbjb" event={"ID":"1d4bbee0-8089-497c-9b50-d3dedd273cf3","Type":"ContainerDied","Data":"f601ab957ee669cfced377e292b28b6020921ea8f2c5ca78900f5b20655f6a69"}
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.492885 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f601ab957ee669cfced377e292b28b6020921ea8f2c5ca78900f5b20655f6a69"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.492901 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-vjbjb"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.590308 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-6b9qw"]
Nov 23 08:48:13 crc kubenswrapper[4988]: E1123 08:48:13.591653 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4bbee0-8089-497c-9b50-d3dedd273cf3" containerName="run-os-openstack-openstack-cell1"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.591696 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4bbee0-8089-497c-9b50-d3dedd273cf3" containerName="run-os-openstack-openstack-cell1"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.592084 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4bbee0-8089-497c-9b50-d3dedd273cf3" containerName="run-os-openstack-openstack-cell1"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.593322 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.595444 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.595842 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.595848 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.595917 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.605081 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-6b9qw"]
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.758406 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-inventory\") pod \"reboot-os-openstack-openstack-cell1-6b9qw\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") " pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.758561 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-6b9qw\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") " pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.758612 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djb5\" (UniqueName: \"kubernetes.io/projected/89e491db-3451-4397-a5f0-fcaf880606ec-kube-api-access-7djb5\") pod \"reboot-os-openstack-openstack-cell1-6b9qw\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") " pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.861101 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-inventory\") pod \"reboot-os-openstack-openstack-cell1-6b9qw\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") " pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.861322 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-6b9qw\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") " pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.861379 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7djb5\" (UniqueName: \"kubernetes.io/projected/89e491db-3451-4397-a5f0-fcaf880606ec-kube-api-access-7djb5\") pod \"reboot-os-openstack-openstack-cell1-6b9qw\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") " pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.869793 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-inventory\") pod \"reboot-os-openstack-openstack-cell1-6b9qw\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") " pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.875219 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-6b9qw\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") " pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.888182 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7djb5\" (UniqueName: \"kubernetes.io/projected/89e491db-3451-4397-a5f0-fcaf880606ec-kube-api-access-7djb5\") pod \"reboot-os-openstack-openstack-cell1-6b9qw\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") " pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:13 crc kubenswrapper[4988]: I1123 08:48:13.915673 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:14 crc kubenswrapper[4988]: I1123 08:48:14.483563 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-6b9qw"]
Nov 23 08:48:14 crc kubenswrapper[4988]: I1123 08:48:14.530771 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw" event={"ID":"89e491db-3451-4397-a5f0-fcaf880606ec","Type":"ContainerStarted","Data":"8e1a5f22fdb48b196b1f3f1095d8eff335f65eaff49581100fb0b6b9e6d5556f"}
Nov 23 08:48:15 crc kubenswrapper[4988]: I1123 08:48:15.522953 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw" event={"ID":"89e491db-3451-4397-a5f0-fcaf880606ec","Type":"ContainerStarted","Data":"2188603c5d926328f269b04d514b7bc227a88ac52b4bbf620690809f5a2c9d15"}
Nov 23 08:48:15 crc kubenswrapper[4988]: I1123 08:48:15.557712 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw" podStartSLOduration=2.150492148 podStartE2EDuration="2.557688653s" podCreationTimestamp="2025-11-23 08:48:13 +0000 UTC" firstStartedPulling="2025-11-23 08:48:14.492134704 +0000 UTC m=+7346.800647467" lastFinishedPulling="2025-11-23 08:48:14.899331209 +0000 UTC m=+7347.207843972" observedRunningTime="2025-11-23 08:48:15.546090508 +0000 UTC m=+7347.854603311" watchObservedRunningTime="2025-11-23 08:48:15.557688653 +0000 UTC m=+7347.866201416"
Nov 23 08:48:22 crc kubenswrapper[4988]: I1123 08:48:22.499920 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"
Nov 23 08:48:22 crc kubenswrapper[4988]: E1123 08:48:22.500739 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:48:31 crc kubenswrapper[4988]: I1123 08:48:31.682987 4988 generic.go:334] "Generic (PLEG): container finished" podID="89e491db-3451-4397-a5f0-fcaf880606ec" containerID="2188603c5d926328f269b04d514b7bc227a88ac52b4bbf620690809f5a2c9d15" exitCode=0
Nov 23 08:48:31 crc kubenswrapper[4988]: I1123 08:48:31.683057 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw" event={"ID":"89e491db-3451-4397-a5f0-fcaf880606ec","Type":"ContainerDied","Data":"2188603c5d926328f269b04d514b7bc227a88ac52b4bbf620690809f5a2c9d15"}
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.124852 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.270472 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7djb5\" (UniqueName: \"kubernetes.io/projected/89e491db-3451-4397-a5f0-fcaf880606ec-kube-api-access-7djb5\") pod \"89e491db-3451-4397-a5f0-fcaf880606ec\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") "
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.270578 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-inventory\") pod \"89e491db-3451-4397-a5f0-fcaf880606ec\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") "
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.270766 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-ssh-key\") pod \"89e491db-3451-4397-a5f0-fcaf880606ec\" (UID: \"89e491db-3451-4397-a5f0-fcaf880606ec\") "
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.277335 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e491db-3451-4397-a5f0-fcaf880606ec-kube-api-access-7djb5" (OuterVolumeSpecName: "kube-api-access-7djb5") pod "89e491db-3451-4397-a5f0-fcaf880606ec" (UID: "89e491db-3451-4397-a5f0-fcaf880606ec"). InnerVolumeSpecName "kube-api-access-7djb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.310918 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-inventory" (OuterVolumeSpecName: "inventory") pod "89e491db-3451-4397-a5f0-fcaf880606ec" (UID: "89e491db-3451-4397-a5f0-fcaf880606ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.337010 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "89e491db-3451-4397-a5f0-fcaf880606ec" (UID: "89e491db-3451-4397-a5f0-fcaf880606ec"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.373960 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.374034 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7djb5\" (UniqueName: \"kubernetes.io/projected/89e491db-3451-4397-a5f0-fcaf880606ec-kube-api-access-7djb5\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.374051 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89e491db-3451-4397-a5f0-fcaf880606ec-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.496067 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"
Nov 23 08:48:33 crc kubenswrapper[4988]: E1123 08:48:33.496320 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.709665 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw" event={"ID":"89e491db-3451-4397-a5f0-fcaf880606ec","Type":"ContainerDied","Data":"8e1a5f22fdb48b196b1f3f1095d8eff335f65eaff49581100fb0b6b9e6d5556f"}
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.709726 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e1a5f22fdb48b196b1f3f1095d8eff335f65eaff49581100fb0b6b9e6d5556f"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.709807 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-6b9qw"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.788151 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-2vl98"]
Nov 23 08:48:33 crc kubenswrapper[4988]: E1123 08:48:33.788581 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e491db-3451-4397-a5f0-fcaf880606ec" containerName="reboot-os-openstack-openstack-cell1"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.788600 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e491db-3451-4397-a5f0-fcaf880606ec" containerName="reboot-os-openstack-openstack-cell1"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.788819 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e491db-3451-4397-a5f0-fcaf880606ec" containerName="reboot-os-openstack-openstack-cell1"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.789638 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.792915 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.793279 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.793300 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.793299 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.793475 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.795109 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.795116 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.795155 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.806405 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-2vl98"]
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884114 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884155 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884178 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884499 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884554 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884600 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p572c\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-kube-api-access-p572c\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884629 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ssh-key\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884676 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884733 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884854 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-inventory\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884900 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.884996 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.885044 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.885095 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.885122 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987161 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987287 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987382 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p572c\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-kube-api-access-p572c\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987433 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ssh-key\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987496 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987557 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987696 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-inventory\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987765 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987880 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.987960 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.988043 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.988101 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.988230 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.988292 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.988343 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.992128 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.993006 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-inventory\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.993563 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.993848 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.993914 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.994406 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.994648 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ssh-key\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.995856 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.996023 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.996592 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.996894 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:33 crc kubenswrapper[4988]: I1123 08:48:33.999055 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:34 crc kubenswrapper[4988]: I1123 08:48:34.000236 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:34 crc kubenswrapper[4988]: I1123 08:48:34.005252 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:34 crc kubenswrapper[4988]: I1123 08:48:34.009504 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p572c\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-kube-api-access-p572c\") pod \"install-certs-openstack-openstack-cell1-2vl98\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") " pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:34 crc kubenswrapper[4988]: I1123 08:48:34.109851 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:48:34 crc kubenswrapper[4988]: I1123 08:48:34.804554 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-2vl98"]
Nov 23 08:48:35 crc kubenswrapper[4988]: I1123 08:48:35.731007 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-2vl98" event={"ID":"c4a7b919-0961-4e11-9804-30c7c3771ef4","Type":"ContainerStarted","Data":"7db12a477ae8b960073a2038e8d24dba21084be3f1b5b2e576931d5aee2ff533"}
Nov 23 08:48:35 crc kubenswrapper[4988]: I1123 08:48:35.731658 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-2vl98" event={"ID":"c4a7b919-0961-4e11-9804-30c7c3771ef4","Type":"ContainerStarted","Data":"eadd1060486e4b453899f61d747d41dc1346681eab1290cc542dbda1b625158d"}
Nov 23 08:48:35 crc kubenswrapper[4988]: I1123 08:48:35.758496 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-2vl98" podStartSLOduration=2.344598487 podStartE2EDuration="2.758478277s" podCreationTimestamp="2025-11-23 08:48:33 +0000 UTC" firstStartedPulling="2025-11-23 08:48:34.814490728 +0000 UTC m=+7367.123003491" lastFinishedPulling="2025-11-23 08:48:35.228370508 +0000 UTC m=+7367.536883281" observedRunningTime="2025-11-23 08:48:35.748972763 +0000 UTC m=+7368.057485546" watchObservedRunningTime="2025-11-23 08:48:35.758478277 +0000 UTC m=+7368.066991040"
Nov 23 08:48:47 crc kubenswrapper[4988]: I1123 08:48:47.497045 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"
Nov 23 08:48:47 crc kubenswrapper[4988]: E1123 08:48:47.497876 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:48:58 crc kubenswrapper[4988]: I1123 08:48:58.501910 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"
Nov 23 08:48:58 crc kubenswrapper[4988]: E1123 08:48:58.502833 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:49:12 crc kubenswrapper[4988]: I1123 08:49:12.496965 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"
Nov 23 08:49:12 crc kubenswrapper[4988]: E1123 08:49:12.497738 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:49:13 crc kubenswrapper[4988]: I1123 08:49:13.155099 4988 generic.go:334] "Generic (PLEG): container finished" podID="c4a7b919-0961-4e11-9804-30c7c3771ef4" containerID="7db12a477ae8b960073a2038e8d24dba21084be3f1b5b2e576931d5aee2ff533" exitCode=0
Nov 23 08:49:13 crc kubenswrapper[4988]: I1123 08:49:13.155207 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-2vl98" event={"ID":"c4a7b919-0961-4e11-9804-30c7c3771ef4","Type":"ContainerDied","Data":"7db12a477ae8b960073a2038e8d24dba21084be3f1b5b2e576931d5aee2ff533"}
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.632684 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.759909 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-nova-combined-ca-bundle\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.759954 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-bootstrap-combined-ca-bundle\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760054 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ssh-key\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760072 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-libvirt-combined-ca-bundle\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760104 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-ovn-default-certs-0\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760173 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-dhcp-combined-ca-bundle\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760245 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-telemetry-default-certs-0\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760278 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p572c\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-kube-api-access-p572c\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760303 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-metadata-combined-ca-bundle\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760339 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-neutron-metadata-default-certs-0\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760371 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-inventory\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760390 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-telemetry-combined-ca-bundle\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760412 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-libvirt-default-certs-0\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760432 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ovn-combined-ca-bundle\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.760511 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-sriov-combined-ca-bundle\") pod \"c4a7b919-0961-4e11-9804-30c7c3771ef4\" (UID: \"c4a7b919-0961-4e11-9804-30c7c3771ef4\") "
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.766772 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.767293 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.767903 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.768156 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.768298 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.768614 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.769605 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.769649 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.772428 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.772628 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.773255 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-kube-api-access-p572c" (OuterVolumeSpecName: "kube-api-access-p572c") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "kube-api-access-p572c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.774403 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.774946 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.793237 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.803095 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-inventory" (OuterVolumeSpecName: "inventory") pod "c4a7b919-0961-4e11-9804-30c7c3771ef4" (UID: "c4a7b919-0961-4e11-9804-30c7c3771ef4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863235 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863308 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p572c\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-kube-api-access-p572c\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863328 4988 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863350 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863370 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863387 4988 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863405 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863423 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863440 4988 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863457 4988 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863475 4988 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863489 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863505 4988 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863522 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c4a7b919-0961-4e11-9804-30c7c3771ef4-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:14 crc kubenswrapper[4988]: I1123 08:49:14.863538 4988 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4a7b919-0961-4e11-9804-30c7c3771ef4-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.183398 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-2vl98" event={"ID":"c4a7b919-0961-4e11-9804-30c7c3771ef4","Type":"ContainerDied","Data":"eadd1060486e4b453899f61d747d41dc1346681eab1290cc542dbda1b625158d"}
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.183450 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eadd1060486e4b453899f61d747d41dc1346681eab1290cc542dbda1b625158d"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.183467 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-2vl98"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.386945 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-4skk7"]
Nov 23 08:49:15 crc kubenswrapper[4988]: E1123 08:49:15.387613 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a7b919-0961-4e11-9804-30c7c3771ef4" containerName="install-certs-openstack-openstack-cell1"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.387682 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a7b919-0961-4e11-9804-30c7c3771ef4" containerName="install-certs-openstack-openstack-cell1"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.387932 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a7b919-0961-4e11-9804-30c7c3771ef4" containerName="install-certs-openstack-openstack-cell1"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.388817 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.391762 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.392170 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.392421 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.392683 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.394439 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.415261 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-4skk7"]
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.474109 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.474174 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6pqf\" (UniqueName: \"kubernetes.io/projected/daa5eecb-492f-418f-a41b-70cb8d86d9fc-kube-api-access-s6pqf\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.474491 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-inventory\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.474525 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ssh-key\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.474588 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.576533 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-inventory\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.576627 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ssh-key\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.576704 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.577012 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.577934 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6pqf\" (UniqueName: \"kubernetes.io/projected/daa5eecb-492f-418f-a41b-70cb8d86d9fc-kube-api-access-s6pqf\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.578111 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.581318 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ssh-key\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.581711 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.585886 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-inventory\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.598800 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6pqf\" (UniqueName: \"kubernetes.io/projected/daa5eecb-492f-418f-a41b-70cb8d86d9fc-kube-api-access-s6pqf\") pod \"ovn-openstack-openstack-cell1-4skk7\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:15 crc kubenswrapper[4988]: I1123 08:49:15.715605 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-4skk7"
Nov 23 08:49:16 crc kubenswrapper[4988]: I1123 08:49:16.271860 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-4skk7"]
Nov 23 08:49:17 crc kubenswrapper[4988]: I1123 08:49:17.206702 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-4skk7" event={"ID":"daa5eecb-492f-418f-a41b-70cb8d86d9fc","Type":"ContainerStarted","Data":"323bbe86e024df4e64f198405daa4fef77dcd1b1cd46c8ae1bc56f86a7b9c3a3"}
Nov 23 08:49:17 crc kubenswrapper[4988]: I1123 08:49:17.207030 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-4skk7" event={"ID":"daa5eecb-492f-418f-a41b-70cb8d86d9fc","Type":"ContainerStarted","Data":"a16c453568d642b381df608818a0ac2045d57e18fe4fa10e647e7d8bdc5eaa1b"}
Nov 23 08:49:17 crc kubenswrapper[4988]: I1123 08:49:17.227067 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-4skk7" podStartSLOduration=1.728823486 podStartE2EDuration="2.227036221s" podCreationTimestamp="2025-11-23 08:49:15 +0000 UTC" firstStartedPulling="2025-11-23 08:49:16.274508201 +0000 UTC m=+7408.583020964" lastFinishedPulling="2025-11-23 08:49:16.772720896 +0000 UTC m=+7409.081233699" observedRunningTime="2025-11-23 08:49:17.223908834 +0000 UTC m=+7409.532421637" watchObservedRunningTime="2025-11-23 08:49:17.227036221 +0000 UTC m=+7409.535549014"
Nov 23 08:49:23 crc kubenswrapper[4988]: I1123 08:49:23.496541 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"
Nov 23 08:49:23 crc kubenswrapper[4988]: E1123 08:49:23.497072 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:49:37 crc kubenswrapper[4988]: I1123 08:49:37.496761 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"
Nov 23 08:49:37 crc kubenswrapper[4988]: E1123 08:49:37.498614 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 08:49:50 crc kubenswrapper[4988]: I1123 08:49:50.496607 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b"
Nov 23 08:49:50 crc kubenswrapper[4988]: E1123 08:49:50.497412 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:50:05 crc kubenswrapper[4988]: I1123 08:50:05.499000 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:50:05 crc kubenswrapper[4988]: E1123 08:50:05.500321 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:50:19 crc kubenswrapper[4988]: I1123 08:50:19.497498 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:50:19 crc kubenswrapper[4988]: E1123 08:50:19.499040 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:50:22 crc kubenswrapper[4988]: I1123 08:50:22.969599 4988 generic.go:334] "Generic (PLEG): container finished" podID="daa5eecb-492f-418f-a41b-70cb8d86d9fc" containerID="323bbe86e024df4e64f198405daa4fef77dcd1b1cd46c8ae1bc56f86a7b9c3a3" exitCode=0 Nov 23 08:50:22 crc kubenswrapper[4988]: I1123 08:50:22.969908 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-4skk7" event={"ID":"daa5eecb-492f-418f-a41b-70cb8d86d9fc","Type":"ContainerDied","Data":"323bbe86e024df4e64f198405daa4fef77dcd1b1cd46c8ae1bc56f86a7b9c3a3"} Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.517091 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-4skk7" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.635066 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6pqf\" (UniqueName: \"kubernetes.io/projected/daa5eecb-492f-418f-a41b-70cb8d86d9fc-kube-api-access-s6pqf\") pod \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.635236 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ssh-key\") pod \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.635454 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovncontroller-config-0\") pod \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.635533 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-inventory\") pod \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.635627 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovn-combined-ca-bundle\") pod \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\" (UID: \"daa5eecb-492f-418f-a41b-70cb8d86d9fc\") " Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.641529 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa5eecb-492f-418f-a41b-70cb8d86d9fc-kube-api-access-s6pqf" (OuterVolumeSpecName: "kube-api-access-s6pqf") pod "daa5eecb-492f-418f-a41b-70cb8d86d9fc" (UID: "daa5eecb-492f-418f-a41b-70cb8d86d9fc"). InnerVolumeSpecName "kube-api-access-s6pqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.649469 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "daa5eecb-492f-418f-a41b-70cb8d86d9fc" (UID: "daa5eecb-492f-418f-a41b-70cb8d86d9fc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.667401 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "daa5eecb-492f-418f-a41b-70cb8d86d9fc" (UID: "daa5eecb-492f-418f-a41b-70cb8d86d9fc"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.667534 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "daa5eecb-492f-418f-a41b-70cb8d86d9fc" (UID: "daa5eecb-492f-418f-a41b-70cb8d86d9fc"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.672371 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-inventory" (OuterVolumeSpecName: "inventory") pod "daa5eecb-492f-418f-a41b-70cb8d86d9fc" (UID: "daa5eecb-492f-418f-a41b-70cb8d86d9fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.740274 4988 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.740330 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.740352 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.740376 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6pqf\" (UniqueName: \"kubernetes.io/projected/daa5eecb-492f-418f-a41b-70cb8d86d9fc-kube-api-access-s6pqf\") on node \"crc\" DevicePath \"\"" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.740397 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daa5eecb-492f-418f-a41b-70cb8d86d9fc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.998689 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-4skk7" event={"ID":"daa5eecb-492f-418f-a41b-70cb8d86d9fc","Type":"ContainerDied","Data":"a16c453568d642b381df608818a0ac2045d57e18fe4fa10e647e7d8bdc5eaa1b"} Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.998984 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16c453568d642b381df608818a0ac2045d57e18fe4fa10e647e7d8bdc5eaa1b" Nov 23 08:50:24 crc kubenswrapper[4988]: I1123 08:50:24.998789 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-4skk7" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.110346 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-t9lcd"] Nov 23 08:50:25 crc kubenswrapper[4988]: E1123 08:50:25.110825 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daa5eecb-492f-418f-a41b-70cb8d86d9fc" containerName="ovn-openstack-openstack-cell1" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.110843 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="daa5eecb-492f-418f-a41b-70cb8d86d9fc" containerName="ovn-openstack-openstack-cell1" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.111059 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="daa5eecb-492f-418f-a41b-70cb8d86d9fc" containerName="ovn-openstack-openstack-cell1" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.111802 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.115398 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.115628 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.115790 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.115898 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.116435 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.116448 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.123013 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-t9lcd"] Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.250270 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.250316 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.250375 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.250425 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxrw\" (UniqueName: \"kubernetes.io/projected/5d4220c3-c550-45e9-be03-fb88df750921-kube-api-access-wxxrw\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.250466 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.250659 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.352726 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.352767 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.352816 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.352864 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxrw\" (UniqueName: \"kubernetes.io/projected/5d4220c3-c550-45e9-be03-fb88df750921-kube-api-access-wxxrw\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.352932 4988 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.352978 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.356620 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.357270 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.358641 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.361166 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.368937 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.373822 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxxrw\" (UniqueName: \"kubernetes.io/projected/5d4220c3-c550-45e9-be03-fb88df750921-kube-api-access-wxxrw\") pod \"neutron-metadata-openstack-openstack-cell1-t9lcd\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:25 crc kubenswrapper[4988]: I1123 08:50:25.468375 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:50:26 crc kubenswrapper[4988]: I1123 08:50:26.006871 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-t9lcd"] Nov 23 08:50:27 crc kubenswrapper[4988]: I1123 08:50:27.047722 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" event={"ID":"5d4220c3-c550-45e9-be03-fb88df750921","Type":"ContainerStarted","Data":"6ffacc08c6b5660decf57d63f56e5b5dc5c375b9f35f494e6e51e05a42813ddc"} Nov 23 08:50:27 crc kubenswrapper[4988]: I1123 08:50:27.047992 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" event={"ID":"5d4220c3-c550-45e9-be03-fb88df750921","Type":"ContainerStarted","Data":"a02643d07c012add62823a61bf312d0c1db77d9d71486170053948085b6c2c8a"} Nov 23 08:50:32 crc kubenswrapper[4988]: I1123 08:50:32.496688 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:50:32 crc kubenswrapper[4988]: E1123 08:50:32.497556 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:50:43 crc kubenswrapper[4988]: I1123 08:50:43.495742 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:50:43 crc kubenswrapper[4988]: E1123 08:50:43.496640 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:50:56 crc kubenswrapper[4988]: I1123 08:50:56.496348 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:50:56 crc kubenswrapper[4988]: E1123 08:50:56.497269 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:51:10 crc kubenswrapper[4988]: I1123 08:51:10.496861 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:51:10 crc kubenswrapper[4988]: E1123 08:51:10.497691 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:51:19 crc kubenswrapper[4988]: I1123 08:51:19.635726 4988 generic.go:334] "Generic (PLEG): container finished" podID="5d4220c3-c550-45e9-be03-fb88df750921" containerID="6ffacc08c6b5660decf57d63f56e5b5dc5c375b9f35f494e6e51e05a42813ddc" exitCode=0 Nov 23 08:51:19 crc kubenswrapper[4988]: I1123 08:51:19.636403 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" event={"ID":"5d4220c3-c550-45e9-be03-fb88df750921","Type":"ContainerDied","Data":"6ffacc08c6b5660decf57d63f56e5b5dc5c375b9f35f494e6e51e05a42813ddc"} Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.166022 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.267781 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxxrw\" (UniqueName: \"kubernetes.io/projected/5d4220c3-c550-45e9-be03-fb88df750921-kube-api-access-wxxrw\") pod \"5d4220c3-c550-45e9-be03-fb88df750921\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.267865 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5d4220c3-c550-45e9-be03-fb88df750921\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.268007 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-ssh-key\") pod \"5d4220c3-c550-45e9-be03-fb88df750921\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.268050 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-inventory\") pod \"5d4220c3-c550-45e9-be03-fb88df750921\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.268111 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-nova-metadata-neutron-config-0\") pod \"5d4220c3-c550-45e9-be03-fb88df750921\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.268136 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-metadata-combined-ca-bundle\") pod \"5d4220c3-c550-45e9-be03-fb88df750921\" (UID: \"5d4220c3-c550-45e9-be03-fb88df750921\") " Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.282559 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5d4220c3-c550-45e9-be03-fb88df750921" (UID: "5d4220c3-c550-45e9-be03-fb88df750921"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.302579 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4220c3-c550-45e9-be03-fb88df750921-kube-api-access-wxxrw" (OuterVolumeSpecName: "kube-api-access-wxxrw") pod "5d4220c3-c550-45e9-be03-fb88df750921" (UID: "5d4220c3-c550-45e9-be03-fb88df750921"). InnerVolumeSpecName "kube-api-access-wxxrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.336414 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5d4220c3-c550-45e9-be03-fb88df750921" (UID: "5d4220c3-c550-45e9-be03-fb88df750921"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.336631 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5d4220c3-c550-45e9-be03-fb88df750921" (UID: "5d4220c3-c550-45e9-be03-fb88df750921"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.340835 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5d4220c3-c550-45e9-be03-fb88df750921" (UID: "5d4220c3-c550-45e9-be03-fb88df750921"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.344179 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-inventory" (OuterVolumeSpecName: "inventory") pod "5d4220c3-c550-45e9-be03-fb88df750921" (UID: "5d4220c3-c550-45e9-be03-fb88df750921"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.370901 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.370942 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.370952 4988 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.370963 4988 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.370976 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxxrw\" (UniqueName: \"kubernetes.io/projected/5d4220c3-c550-45e9-be03-fb88df750921-kube-api-access-wxxrw\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.370986 4988 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5d4220c3-c550-45e9-be03-fb88df750921-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.669388 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" event={"ID":"5d4220c3-c550-45e9-be03-fb88df750921","Type":"ContainerDied","Data":"a02643d07c012add62823a61bf312d0c1db77d9d71486170053948085b6c2c8a"} Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.669718 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a02643d07c012add62823a61bf312d0c1db77d9d71486170053948085b6c2c8a" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.669481 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-t9lcd" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.973290 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-8f6xc"] Nov 23 08:51:21 crc kubenswrapper[4988]: E1123 08:51:21.973987 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4220c3-c550-45e9-be03-fb88df750921" containerName="neutron-metadata-openstack-openstack-cell1" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.974011 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4220c3-c550-45e9-be03-fb88df750921" containerName="neutron-metadata-openstack-openstack-cell1" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.974298 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d4220c3-c550-45e9-be03-fb88df750921" containerName="neutron-metadata-openstack-openstack-cell1" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.975165 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.978388 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.978902 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.978980 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.979031 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:51:21 crc kubenswrapper[4988]: I1123 08:51:21.979323 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:21.999860 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-8f6xc"] Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.085878 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-inventory\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.085970 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.086069 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmlnk\" (UniqueName: \"kubernetes.io/projected/9dacc32b-acd1-4160-914d-f3c2dfd68baa-kube-api-access-jmlnk\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.086114 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.086378 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-ssh-key\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.188323 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-inventory\") pod 
\"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.188408 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.188477 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmlnk\" (UniqueName: \"kubernetes.io/projected/9dacc32b-acd1-4160-914d-f3c2dfd68baa-kube-api-access-jmlnk\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.188522 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.188608 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-ssh-key\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.192645 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.194990 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-inventory\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.195879 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-ssh-key\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.196031 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.207285 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jmlnk\" (UniqueName: \"kubernetes.io/projected/9dacc32b-acd1-4160-914d-f3c2dfd68baa-kube-api-access-jmlnk\") pod \"libvirt-openstack-openstack-cell1-8f6xc\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.298545 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.496989 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:51:22 crc kubenswrapper[4988]: E1123 08:51:22.497486 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:51:22 crc kubenswrapper[4988]: I1123 08:51:22.854035 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-8f6xc"] Nov 23 08:51:23 crc kubenswrapper[4988]: I1123 08:51:23.696367 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" event={"ID":"9dacc32b-acd1-4160-914d-f3c2dfd68baa","Type":"ContainerStarted","Data":"07b2d05e5363ed31d588314f2eba8ae130422d444106761ba90653d299b26fa7"} Nov 23 08:51:23 crc kubenswrapper[4988]: I1123 08:51:23.697121 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" event={"ID":"9dacc32b-acd1-4160-914d-f3c2dfd68baa","Type":"ContainerStarted","Data":"39d4b673ebf9911419b482e34c8e41c42bb063736d8679a139773fbb752c3b1f"} Nov 23 08:51:23 crc kubenswrapper[4988]: I1123 08:51:23.734051 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" podStartSLOduration=2.327938844 podStartE2EDuration="2.734021113s" podCreationTimestamp="2025-11-23 08:51:21 +0000 UTC" firstStartedPulling="2025-11-23 08:51:22.858738593 +0000 UTC m=+7535.167251356" lastFinishedPulling="2025-11-23 08:51:23.264820862 +0000 UTC m=+7535.573333625" observedRunningTime="2025-11-23 08:51:23.712257178 +0000 UTC m=+7536.020769931" watchObservedRunningTime="2025-11-23 08:51:23.734021113 +0000 UTC m=+7536.042533926" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.653451 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9fthg"] Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.656526 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.679606 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9fthg"] Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.804251 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9t5q\" (UniqueName: \"kubernetes.io/projected/54d10bb3-601a-40fe-bb37-20d8be798304-kube-api-access-f9t5q\") pod \"certified-operators-9fthg\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.804426 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-catalog-content\") pod \"certified-operators-9fthg\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.804520 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-utilities\") pod \"certified-operators-9fthg\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.905985 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-utilities\") pod \"certified-operators-9fthg\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.906150 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9t5q\" (UniqueName: \"kubernetes.io/projected/54d10bb3-601a-40fe-bb37-20d8be798304-kube-api-access-f9t5q\") pod \"certified-operators-9fthg\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.906487 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-utilities\") pod \"certified-operators-9fthg\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.906522 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-catalog-content\") pod \"certified-operators-9fthg\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.906817 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-catalog-content\") pod \"certified-operators-9fthg\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.933229 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f9t5q\" (UniqueName: \"kubernetes.io/projected/54d10bb3-601a-40fe-bb37-20d8be798304-kube-api-access-f9t5q\") pod \"certified-operators-9fthg\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:26 crc kubenswrapper[4988]: I1123 08:51:26.981142 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:27 crc kubenswrapper[4988]: I1123 08:51:27.604031 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9fthg"] Nov 23 08:51:27 crc kubenswrapper[4988]: I1123 08:51:27.755251 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fthg" event={"ID":"54d10bb3-601a-40fe-bb37-20d8be798304","Type":"ContainerStarted","Data":"e8f58d18c013e8e928adb2d9eebc1e9e4aedbd2647b2b1866ab94895e5de13ea"} Nov 23 08:51:28 crc kubenswrapper[4988]: I1123 08:51:28.773188 4988 generic.go:334] "Generic (PLEG): container finished" podID="54d10bb3-601a-40fe-bb37-20d8be798304" containerID="9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171" exitCode=0 Nov 23 08:51:28 crc kubenswrapper[4988]: I1123 08:51:28.773254 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fthg" event={"ID":"54d10bb3-601a-40fe-bb37-20d8be798304","Type":"ContainerDied","Data":"9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171"} Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.052381 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jlrcx"] Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.062818 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.064758 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlrcx"] Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.155537 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-utilities\") pod \"redhat-marketplace-jlrcx\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.155592 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-catalog-content\") pod \"redhat-marketplace-jlrcx\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.156115 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxh74\" (UniqueName: \"kubernetes.io/projected/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-kube-api-access-vxh74\") pod \"redhat-marketplace-jlrcx\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.258598 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxh74\" (UniqueName: \"kubernetes.io/projected/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-kube-api-access-vxh74\") pod \"redhat-marketplace-jlrcx\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.258730 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-utilities\") pod \"redhat-marketplace-jlrcx\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.258751 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-catalog-content\") pod \"redhat-marketplace-jlrcx\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.259251 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-catalog-content\") pod \"redhat-marketplace-jlrcx\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.259432 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-utilities\") pod \"redhat-marketplace-jlrcx\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.282023 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vxh74\" (UniqueName: \"kubernetes.io/projected/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-kube-api-access-vxh74\") pod \"redhat-marketplace-jlrcx\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.408789 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.784241 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fthg" event={"ID":"54d10bb3-601a-40fe-bb37-20d8be798304","Type":"ContainerStarted","Data":"7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e"} Nov 23 08:51:29 crc kubenswrapper[4988]: I1123 08:51:29.882738 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlrcx"] Nov 23 08:51:30 crc kubenswrapper[4988]: I1123 08:51:30.806550 4988 generic.go:334] "Generic (PLEG): container finished" podID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerID="a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a" exitCode=0 Nov 23 08:51:30 crc kubenswrapper[4988]: I1123 08:51:30.809488 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlrcx" event={"ID":"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41","Type":"ContainerDied","Data":"a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a"} Nov 23 08:51:30 crc kubenswrapper[4988]: I1123 08:51:30.809560 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlrcx" event={"ID":"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41","Type":"ContainerStarted","Data":"4c063dbe7cedf620b33c6f19050c3227dd232b13064b27f274280beda49a56e1"} Nov 23 08:51:31 crc kubenswrapper[4988]: I1123 08:51:31.821393 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlrcx" event={"ID":"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41","Type":"ContainerStarted","Data":"c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816"} Nov 23 08:51:31 crc kubenswrapper[4988]: I1123 08:51:31.823779 4988 generic.go:334] "Generic (PLEG): container finished" podID="54d10bb3-601a-40fe-bb37-20d8be798304" containerID="7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e" exitCode=0 Nov 23 08:51:31 crc kubenswrapper[4988]: I1123 08:51:31.823820 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fthg" event={"ID":"54d10bb3-601a-40fe-bb37-20d8be798304","Type":"ContainerDied","Data":"7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e"} Nov 23 08:51:32 crc kubenswrapper[4988]: I1123 08:51:32.836565 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fthg" event={"ID":"54d10bb3-601a-40fe-bb37-20d8be798304","Type":"ContainerStarted","Data":"a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559"} Nov 23 08:51:32 crc kubenswrapper[4988]: I1123 08:51:32.863966 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9fthg" podStartSLOduration=3.296708037 podStartE2EDuration="6.86394884s" podCreationTimestamp="2025-11-23 08:51:26 +0000 UTC" firstStartedPulling="2025-11-23 08:51:28.777037905 +0000 UTC m=+7541.085550678" lastFinishedPulling="2025-11-23 08:51:32.344278708 +0000 UTC 
m=+7544.652791481" observedRunningTime="2025-11-23 08:51:32.853255017 +0000 UTC m=+7545.161767810" watchObservedRunningTime="2025-11-23 08:51:32.86394884 +0000 UTC m=+7545.172461603" Nov 23 08:51:33 crc kubenswrapper[4988]: I1123 08:51:33.846272 4988 generic.go:334] "Generic (PLEG): container finished" podID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerID="c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816" exitCode=0 Nov 23 08:51:33 crc kubenswrapper[4988]: I1123 08:51:33.846340 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlrcx" event={"ID":"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41","Type":"ContainerDied","Data":"c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816"} Nov 23 08:51:34 crc kubenswrapper[4988]: I1123 08:51:34.859867 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlrcx" event={"ID":"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41","Type":"ContainerStarted","Data":"7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e"} Nov 23 08:51:34 crc kubenswrapper[4988]: I1123 08:51:34.890862 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jlrcx" podStartSLOduration=2.478864302 podStartE2EDuration="5.890838585s" podCreationTimestamp="2025-11-23 08:51:29 +0000 UTC" firstStartedPulling="2025-11-23 08:51:30.823803399 +0000 UTC m=+7543.132316172" lastFinishedPulling="2025-11-23 08:51:34.235777692 +0000 UTC m=+7546.544290455" observedRunningTime="2025-11-23 08:51:34.88658411 +0000 UTC m=+7547.195096973" watchObservedRunningTime="2025-11-23 08:51:34.890838585 +0000 UTC m=+7547.199351368" Nov 23 08:51:36 crc kubenswrapper[4988]: I1123 08:51:36.496230 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:51:36 crc kubenswrapper[4988]: E1123 08:51:36.496878 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:51:36 crc kubenswrapper[4988]: I1123 08:51:36.981939 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:36 crc kubenswrapper[4988]: I1123 08:51:36.982005 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:37 crc kubenswrapper[4988]: I1123 08:51:37.061807 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:37 crc kubenswrapper[4988]: I1123 08:51:37.981833 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:38 crc kubenswrapper[4988]: I1123 08:51:38.234291 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9fthg"] Nov 23 08:51:39 crc kubenswrapper[4988]: I1123 08:51:39.409914 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:39 crc kubenswrapper[4988]: 
I1123 08:51:39.409957 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:39 crc kubenswrapper[4988]: I1123 08:51:39.469380 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:39 crc kubenswrapper[4988]: I1123 08:51:39.915988 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9fthg" podUID="54d10bb3-601a-40fe-bb37-20d8be798304" containerName="registry-server" containerID="cri-o://a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559" gracePeriod=2 Nov 23 08:51:39 crc kubenswrapper[4988]: I1123 08:51:39.966956 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.440740 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.497603 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-catalog-content\") pod \"54d10bb3-601a-40fe-bb37-20d8be798304\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.497727 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9t5q\" (UniqueName: \"kubernetes.io/projected/54d10bb3-601a-40fe-bb37-20d8be798304-kube-api-access-f9t5q\") pod \"54d10bb3-601a-40fe-bb37-20d8be798304\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.497775 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-utilities\") pod \"54d10bb3-601a-40fe-bb37-20d8be798304\" (UID: \"54d10bb3-601a-40fe-bb37-20d8be798304\") " Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.498963 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-utilities" (OuterVolumeSpecName: "utilities") pod "54d10bb3-601a-40fe-bb37-20d8be798304" (UID: "54d10bb3-601a-40fe-bb37-20d8be798304"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.510541 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d10bb3-601a-40fe-bb37-20d8be798304-kube-api-access-f9t5q" (OuterVolumeSpecName: "kube-api-access-f9t5q") pod "54d10bb3-601a-40fe-bb37-20d8be798304" (UID: "54d10bb3-601a-40fe-bb37-20d8be798304"). InnerVolumeSpecName "kube-api-access-f9t5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.549395 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54d10bb3-601a-40fe-bb37-20d8be798304" (UID: "54d10bb3-601a-40fe-bb37-20d8be798304"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.600782 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9t5q\" (UniqueName: \"kubernetes.io/projected/54d10bb3-601a-40fe-bb37-20d8be798304-kube-api-access-f9t5q\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.600844 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.600865 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54d10bb3-601a-40fe-bb37-20d8be798304-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.927546 4988 generic.go:334] "Generic (PLEG): container finished" podID="54d10bb3-601a-40fe-bb37-20d8be798304" containerID="a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559" exitCode=0 Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.927612 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9fthg" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.927663 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fthg" event={"ID":"54d10bb3-601a-40fe-bb37-20d8be798304","Type":"ContainerDied","Data":"a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559"} Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.927723 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9fthg" event={"ID":"54d10bb3-601a-40fe-bb37-20d8be798304","Type":"ContainerDied","Data":"e8f58d18c013e8e928adb2d9eebc1e9e4aedbd2647b2b1866ab94895e5de13ea"} Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.927753 4988 scope.go:117] "RemoveContainer" containerID="a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.969443 4988 scope.go:117] "RemoveContainer" containerID="7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.972241 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9fthg"] Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.992535 4988 scope.go:117] "RemoveContainer" containerID="9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171" Nov 23 08:51:40 crc kubenswrapper[4988]: I1123 08:51:40.993655 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9fthg"] Nov 23 08:51:41 crc kubenswrapper[4988]: I1123 08:51:41.049738 4988 scope.go:117] "RemoveContainer" containerID="a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559" Nov 23 08:51:41 crc kubenswrapper[4988]: E1123 08:51:41.050506 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559\": container with ID starting with a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559 not found: ID does not exist" containerID="a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559" Nov 23 08:51:41 crc kubenswrapper[4988]: I1123 08:51:41.050573 
4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559"} err="failed to get container status \"a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559\": rpc error: code = NotFound desc = could not find container \"a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559\": container with ID starting with a40439a48ed2ec07ee154444e8f41614f18d28010c77e36bc33911ca03894559 not found: ID does not exist" Nov 23 08:51:41 crc kubenswrapper[4988]: I1123 08:51:41.050617 4988 scope.go:117] "RemoveContainer" containerID="7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e" Nov 23 08:51:41 crc kubenswrapper[4988]: E1123 08:51:41.050987 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e\": container with ID starting with 7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e not found: ID does not exist" containerID="7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e" Nov 23 08:51:41 crc kubenswrapper[4988]: I1123 08:51:41.051023 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e"} err="failed to get container status \"7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e\": rpc error: code = NotFound desc = could not find container \"7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e\": container with ID starting with 7af2c352fac90a32a224c43c303f12a47842d51bd54b263acba0c98c0516be3e not found: ID does not exist" Nov 23 08:51:41 crc kubenswrapper[4988]: I1123 08:51:41.051056 4988 scope.go:117] "RemoveContainer" containerID="9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171" Nov 23 08:51:41 crc kubenswrapper[4988]: E1123 08:51:41.051496 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171\": container with ID starting with 9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171 not found: ID does not exist" containerID="9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171" Nov 23 08:51:41 crc kubenswrapper[4988]: I1123 08:51:41.051538 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171"} err="failed to get container status \"9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171\": rpc error: code = NotFound desc = could not find container \"9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171\": container with ID starting with 9e08440ecc72844825788883235943119b31969e11a72048de4e00b5f1181171 not found: ID does not exist" Nov 23 08:51:41 crc kubenswrapper[4988]: I1123 08:51:41.837120 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlrcx"] Nov 23 08:51:41 crc kubenswrapper[4988]: I1123 08:51:41.941118 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jlrcx" podUID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerName="registry-server" containerID="cri-o://7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e" gracePeriod=2 Nov 23 
08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.511180 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.512143 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d10bb3-601a-40fe-bb37-20d8be798304" path="/var/lib/kubelet/pods/54d10bb3-601a-40fe-bb37-20d8be798304/volumes" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.649536 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxh74\" (UniqueName: \"kubernetes.io/projected/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-kube-api-access-vxh74\") pod \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.649724 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-catalog-content\") pod \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.649775 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-utilities\") pod \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\" (UID: \"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41\") " Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.651873 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-utilities" (OuterVolumeSpecName: "utilities") pod "2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" (UID: "2c79ac2b-0ad9-4f24-a78a-0d22869bbb41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.655381 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-kube-api-access-vxh74" (OuterVolumeSpecName: "kube-api-access-vxh74") pod "2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" (UID: "2c79ac2b-0ad9-4f24-a78a-0d22869bbb41"). InnerVolumeSpecName "kube-api-access-vxh74". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.669899 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" (UID: "2c79ac2b-0ad9-4f24-a78a-0d22869bbb41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.752790 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxh74\" (UniqueName: \"kubernetes.io/projected/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-kube-api-access-vxh74\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.752868 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.752894 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.955354 4988 generic.go:334] "Generic (PLEG): container finished" podID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerID="7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e" exitCode=0 Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.955395 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlrcx" event={"ID":"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41","Type":"ContainerDied","Data":"7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e"} Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.955426 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jlrcx" event={"ID":"2c79ac2b-0ad9-4f24-a78a-0d22869bbb41","Type":"ContainerDied","Data":"4c063dbe7cedf620b33c6f19050c3227dd232b13064b27f274280beda49a56e1"} Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.955467 4988 scope.go:117] "RemoveContainer" containerID="7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.955537 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jlrcx" Nov 23 08:51:42 crc kubenswrapper[4988]: I1123 08:51:42.980535 4988 scope.go:117] "RemoveContainer" containerID="c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816" Nov 23 08:51:43 crc kubenswrapper[4988]: I1123 08:51:43.004264 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlrcx"] Nov 23 08:51:43 crc kubenswrapper[4988]: I1123 08:51:43.008790 4988 scope.go:117] "RemoveContainer" containerID="a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a" Nov 23 08:51:43 crc kubenswrapper[4988]: I1123 08:51:43.023245 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jlrcx"] Nov 23 08:51:43 crc kubenswrapper[4988]: I1123 08:51:43.061270 4988 scope.go:117] "RemoveContainer" containerID="7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e" Nov 23 08:51:43 crc kubenswrapper[4988]: E1123 08:51:43.063164 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e\": container with ID starting with 7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e not found: ID does not exist" containerID="7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e" Nov 23 08:51:43 crc kubenswrapper[4988]: I1123 08:51:43.063230 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e"} err="failed to get container status \"7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e\": rpc error: code = NotFound desc = could not find container \"7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e\": container with ID starting with 7dbf987b0f38af16529ce512d87c9f8d7305047d2d7cc6b608d4b3686ecc355e not found: ID does not exist" Nov 23 08:51:43 crc kubenswrapper[4988]: I1123 08:51:43.063263 4988 scope.go:117] "RemoveContainer" containerID="c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816" Nov 23 08:51:43 crc kubenswrapper[4988]: E1123 08:51:43.063944 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816\": container with ID starting with c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816 not found: ID does not exist" containerID="c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816" Nov 23 08:51:43 crc kubenswrapper[4988]: I1123 08:51:43.063974 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816"} err="failed to get container status \"c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816\": rpc error: code = NotFound desc = could not find container \"c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816\": container with ID starting with c7c2cfb23343115767770d75c3f7ba9ddf83308d518917fd28800f4fa6382816 not found: ID does not exist" Nov 23 08:51:43 crc kubenswrapper[4988]: I1123 08:51:43.063992 4988 scope.go:117] "RemoveContainer" containerID="a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a" Nov 23 08:51:43 crc kubenswrapper[4988]: E1123 08:51:43.064356 4988 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a\": container with ID starting with a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a not found: ID does not exist" containerID="a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a" Nov 23 08:51:43 crc kubenswrapper[4988]: I1123 08:51:43.064385 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a"} err="failed to get container status \"a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a\": rpc error: code = NotFound desc = could not find container \"a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a\": container with ID starting with a8a39173dbfbc796d51310999bcc3d5cb04833e02d8aab66548fe64753acbf8a not found: ID does not exist" Nov 23 08:51:44 crc kubenswrapper[4988]: I1123 08:51:44.513855 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" path="/var/lib/kubelet/pods/2c79ac2b-0ad9-4f24-a78a-0d22869bbb41/volumes" Nov 23 08:51:50 crc kubenswrapper[4988]: I1123 08:51:50.497345 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:51:50 crc kubenswrapper[4988]: E1123 08:51:50.498502 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:52:05 crc kubenswrapper[4988]: I1123 08:52:05.496868 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:52:05 crc kubenswrapper[4988]: E1123 08:52:05.497941 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:52:16 crc kubenswrapper[4988]: I1123 08:52:16.502444 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:52:16 crc kubenswrapper[4988]: E1123 08:52:16.518740 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:52:27 crc kubenswrapper[4988]: I1123 08:52:27.496548 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:52:28 crc kubenswrapper[4988]: I1123 08:52:28.450861 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"3bd0b03093d4e5730e6152df32a6d625ce0d59bc9d4baa74c74cf577290966ed"} Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.196343 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w4r9s"] Nov 23 08:53:27 crc kubenswrapper[4988]: E1123 08:53:27.197464 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerName="registry-server" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.197488 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerName="registry-server" Nov 23 08:53:27 crc kubenswrapper[4988]: E1123 08:53:27.197523 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d10bb3-601a-40fe-bb37-20d8be798304" containerName="extract-utilities" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.197535 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d10bb3-601a-40fe-bb37-20d8be798304" containerName="extract-utilities" Nov 23 08:53:27 crc kubenswrapper[4988]: E1123 08:53:27.197550 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerName="extract-utilities" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.197561 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerName="extract-utilities" Nov 23 08:53:27 crc kubenswrapper[4988]: E1123 08:53:27.197592 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d10bb3-601a-40fe-bb37-20d8be798304" containerName="registry-server" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.197603 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d10bb3-601a-40fe-bb37-20d8be798304" containerName="registry-server" Nov 23 08:53:27 crc kubenswrapper[4988]: E1123 08:53:27.197629 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerName="extract-content" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.197637 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerName="extract-content" Nov 23 08:53:27 crc kubenswrapper[4988]: E1123 08:53:27.197653 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d10bb3-601a-40fe-bb37-20d8be798304" containerName="extract-content" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.197661 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d10bb3-601a-40fe-bb37-20d8be798304" containerName="extract-content" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.198078 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d10bb3-601a-40fe-bb37-20d8be798304" containerName="registry-server" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.198097 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c79ac2b-0ad9-4f24-a78a-0d22869bbb41" containerName="registry-server" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.200112 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.224510 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4r9s"] Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.297397 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-catalog-content\") pod \"community-operators-w4r9s\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.297621 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-utilities\") pod \"community-operators-w4r9s\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.297745 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qgh\" (UniqueName: \"kubernetes.io/projected/901028e7-824c-49fe-8256-10f0576029e7-kube-api-access-65qgh\") pod \"community-operators-w4r9s\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.399954 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-utilities\") pod \"community-operators-w4r9s\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.400032 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65qgh\" (UniqueName: \"kubernetes.io/projected/901028e7-824c-49fe-8256-10f0576029e7-kube-api-access-65qgh\") pod \"community-operators-w4r9s\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.400177 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-catalog-content\") pod \"community-operators-w4r9s\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.400775 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-catalog-content\") pod \"community-operators-w4r9s\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.401073 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-utilities\") pod \"community-operators-w4r9s\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.427382 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-65qgh\" (UniqueName: \"kubernetes.io/projected/901028e7-824c-49fe-8256-10f0576029e7-kube-api-access-65qgh\") pod \"community-operators-w4r9s\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:27 crc kubenswrapper[4988]: I1123 08:53:27.543114 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:28 crc kubenswrapper[4988]: I1123 08:53:28.065479 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w4r9s"] Nov 23 08:53:28 crc kubenswrapper[4988]: I1123 08:53:28.116522 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4r9s" event={"ID":"901028e7-824c-49fe-8256-10f0576029e7","Type":"ContainerStarted","Data":"448803d65adfbd70b6f28899eb93bb217a2770a56ba7a2439739bba317974e0c"} Nov 23 08:53:29 crc kubenswrapper[4988]: I1123 08:53:29.134314 4988 generic.go:334] "Generic (PLEG): container finished" podID="901028e7-824c-49fe-8256-10f0576029e7" containerID="c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706" exitCode=0 Nov 23 08:53:29 crc kubenswrapper[4988]: I1123 08:53:29.134759 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4r9s" event={"ID":"901028e7-824c-49fe-8256-10f0576029e7","Type":"ContainerDied","Data":"c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706"} Nov 23 08:53:29 crc kubenswrapper[4988]: I1123 08:53:29.138353 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:53:30 crc kubenswrapper[4988]: I1123 08:53:30.148783 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4r9s" event={"ID":"901028e7-824c-49fe-8256-10f0576029e7","Type":"ContainerStarted","Data":"aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d"} Nov 23 08:53:32 crc kubenswrapper[4988]: I1123 08:53:32.170000 4988 generic.go:334] "Generic (PLEG): container finished" podID="901028e7-824c-49fe-8256-10f0576029e7" containerID="aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d" exitCode=0 Nov 23 08:53:32 crc kubenswrapper[4988]: I1123 08:53:32.170116 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4r9s" event={"ID":"901028e7-824c-49fe-8256-10f0576029e7","Type":"ContainerDied","Data":"aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d"} Nov 23 08:53:33 crc kubenswrapper[4988]: I1123 08:53:33.185064 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4r9s" event={"ID":"901028e7-824c-49fe-8256-10f0576029e7","Type":"ContainerStarted","Data":"4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a"} Nov 23 08:53:33 crc kubenswrapper[4988]: I1123 08:53:33.210614 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w4r9s" podStartSLOduration=2.771242836 podStartE2EDuration="6.210595416s" podCreationTimestamp="2025-11-23 08:53:27 +0000 UTC" firstStartedPulling="2025-11-23 08:53:29.137684111 +0000 UTC m=+7661.446196914" lastFinishedPulling="2025-11-23 08:53:32.577036731 +0000 UTC m=+7664.885549494" observedRunningTime="2025-11-23 08:53:33.201208904 +0000 UTC m=+7665.509721677" watchObservedRunningTime="2025-11-23 
08:53:33.210595416 +0000 UTC m=+7665.519108189" Nov 23 08:53:37 crc kubenswrapper[4988]: I1123 08:53:37.543334 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:37 crc kubenswrapper[4988]: I1123 08:53:37.543783 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:37 crc kubenswrapper[4988]: I1123 08:53:37.600547 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:38 crc kubenswrapper[4988]: I1123 08:53:38.290412 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:38 crc kubenswrapper[4988]: I1123 08:53:38.348667 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w4r9s"] Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.257289 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w4r9s" podUID="901028e7-824c-49fe-8256-10f0576029e7" containerName="registry-server" containerID="cri-o://4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a" gracePeriod=2 Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.786315 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.864918 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-utilities\") pod \"901028e7-824c-49fe-8256-10f0576029e7\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.865130 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65qgh\" (UniqueName: \"kubernetes.io/projected/901028e7-824c-49fe-8256-10f0576029e7-kube-api-access-65qgh\") pod \"901028e7-824c-49fe-8256-10f0576029e7\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.865283 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-catalog-content\") pod \"901028e7-824c-49fe-8256-10f0576029e7\" (UID: \"901028e7-824c-49fe-8256-10f0576029e7\") " Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.865790 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-utilities" (OuterVolumeSpecName: "utilities") pod "901028e7-824c-49fe-8256-10f0576029e7" (UID: "901028e7-824c-49fe-8256-10f0576029e7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.865913 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.871330 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901028e7-824c-49fe-8256-10f0576029e7-kube-api-access-65qgh" (OuterVolumeSpecName: "kube-api-access-65qgh") pod "901028e7-824c-49fe-8256-10f0576029e7" (UID: "901028e7-824c-49fe-8256-10f0576029e7"). InnerVolumeSpecName "kube-api-access-65qgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.916863 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "901028e7-824c-49fe-8256-10f0576029e7" (UID: "901028e7-824c-49fe-8256-10f0576029e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.968534 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65qgh\" (UniqueName: \"kubernetes.io/projected/901028e7-824c-49fe-8256-10f0576029e7-kube-api-access-65qgh\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:40 crc kubenswrapper[4988]: I1123 08:53:40.968857 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901028e7-824c-49fe-8256-10f0576029e7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.274932 4988 generic.go:334] "Generic (PLEG): container finished" podID="901028e7-824c-49fe-8256-10f0576029e7" containerID="4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a" exitCode=0 Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.274998 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4r9s" event={"ID":"901028e7-824c-49fe-8256-10f0576029e7","Type":"ContainerDied","Data":"4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a"} Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.275076 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w4r9s" event={"ID":"901028e7-824c-49fe-8256-10f0576029e7","Type":"ContainerDied","Data":"448803d65adfbd70b6f28899eb93bb217a2770a56ba7a2439739bba317974e0c"} Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.275106 4988 scope.go:117] "RemoveContainer" containerID="4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.275445 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w4r9s" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.302092 4988 scope.go:117] "RemoveContainer" containerID="aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.329157 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w4r9s"] Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.342009 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w4r9s"] Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.363571 4988 scope.go:117] "RemoveContainer" containerID="c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.403793 4988 scope.go:117] "RemoveContainer" containerID="4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a" Nov 23 08:53:41 crc kubenswrapper[4988]: E1123 08:53:41.404513 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a\": container with ID starting with 4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a not found: ID does not exist" containerID="4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.404596 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a"} err="failed to get container status \"4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a\": rpc error: code = NotFound desc = could not find container \"4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a\": container with ID starting with 4038632f5d0ab8b2bb56a1b27c01753e0f891227cfdd846c88f342bc52fa813a not found: ID does not exist" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.404760 4988 scope.go:117] "RemoveContainer" containerID="aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d" Nov 23 08:53:41 crc kubenswrapper[4988]: E1123 08:53:41.405569 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d\": container with ID starting with aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d not found: ID does not exist" containerID="aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.405705 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d"} err="failed to get container status \"aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d\": rpc error: code = NotFound desc = could not find container \"aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d\": container with ID starting with aef7c297a507d4fba11e65f16396f16b5525078b9242d925f671ffec355e485d not found: ID does not exist" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.405903 4988 scope.go:117] "RemoveContainer" containerID="c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706" Nov 23 08:53:41 crc kubenswrapper[4988]: E1123 08:53:41.406556 4988 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706\": container with ID starting with c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706 not found: ID does not exist" containerID="c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706" Nov 23 08:53:41 crc kubenswrapper[4988]: I1123 08:53:41.407005 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706"} err="failed to get container status \"c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706\": rpc error: code = NotFound desc = could not find container \"c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706\": container with ID starting with c8271f0978ba8d20fc8295589367e263e3ad7fbf376cf6da4bfd97c1452a4706 not found: ID does not exist" Nov 23 08:53:42 crc kubenswrapper[4988]: I1123 08:53:42.519138 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901028e7-824c-49fe-8256-10f0576029e7" path="/var/lib/kubelet/pods/901028e7-824c-49fe-8256-10f0576029e7/volumes" Nov 23 08:54:51 crc kubenswrapper[4988]: I1123 08:54:51.672570 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:54:51 crc kubenswrapper[4988]: I1123 08:54:51.673168 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:55:21 crc kubenswrapper[4988]: I1123 08:55:21.672492 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:55:21 crc kubenswrapper[4988]: I1123 08:55:21.673160 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:55:51 crc kubenswrapper[4988]: I1123 08:55:51.672844 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:55:51 crc kubenswrapper[4988]: I1123 08:55:51.673365 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:55:51 crc kubenswrapper[4988]: I1123 08:55:51.673422 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 08:55:51 crc kubenswrapper[4988]: I1123 08:55:51.674101 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3bd0b03093d4e5730e6152df32a6d625ce0d59bc9d4baa74c74cf577290966ed"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:55:51 crc kubenswrapper[4988]: I1123 08:55:51.674152 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://3bd0b03093d4e5730e6152df32a6d625ce0d59bc9d4baa74c74cf577290966ed" gracePeriod=600 Nov 23 08:55:51 crc kubenswrapper[4988]: I1123 08:55:51.816679 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="3bd0b03093d4e5730e6152df32a6d625ce0d59bc9d4baa74c74cf577290966ed" exitCode=0 Nov 23 08:55:51 crc kubenswrapper[4988]: I1123 08:55:51.816751 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"3bd0b03093d4e5730e6152df32a6d625ce0d59bc9d4baa74c74cf577290966ed"} Nov 23 08:55:51 crc kubenswrapper[4988]: I1123 08:55:51.817118 4988 scope.go:117] "RemoveContainer" containerID="b5edf74719e256e0740acbd37e82ebce43249547a9dadacf2f18ac7c59b1868b" Nov 23 08:55:52 crc kubenswrapper[4988]: I1123 08:55:52.828005 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e"} Nov 23 08:55:59 crc kubenswrapper[4988]: I1123 08:55:59.913184 4988 generic.go:334] "Generic (PLEG): container finished" podID="9dacc32b-acd1-4160-914d-f3c2dfd68baa" containerID="07b2d05e5363ed31d588314f2eba8ae130422d444106761ba90653d299b26fa7" exitCode=0 Nov 23 08:55:59 crc kubenswrapper[4988]: I1123 08:55:59.913335 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" event={"ID":"9dacc32b-acd1-4160-914d-f3c2dfd68baa","Type":"ContainerDied","Data":"07b2d05e5363ed31d588314f2eba8ae130422d444106761ba90653d299b26fa7"} Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.397955 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.484177 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-secret-0\") pod \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.485245 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-ssh-key\") pod \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.485320 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-combined-ca-bundle\") pod \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.485427 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-inventory\") pod \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.485600 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmlnk\" (UniqueName: \"kubernetes.io/projected/9dacc32b-acd1-4160-914d-f3c2dfd68baa-kube-api-access-jmlnk\") pod \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\" (UID: \"9dacc32b-acd1-4160-914d-f3c2dfd68baa\") " Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.490344 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dacc32b-acd1-4160-914d-f3c2dfd68baa-kube-api-access-jmlnk" (OuterVolumeSpecName: "kube-api-access-jmlnk") pod "9dacc32b-acd1-4160-914d-f3c2dfd68baa" (UID: "9dacc32b-acd1-4160-914d-f3c2dfd68baa"). InnerVolumeSpecName "kube-api-access-jmlnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.490921 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9dacc32b-acd1-4160-914d-f3c2dfd68baa" (UID: "9dacc32b-acd1-4160-914d-f3c2dfd68baa"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.518529 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9dacc32b-acd1-4160-914d-f3c2dfd68baa" (UID: "9dacc32b-acd1-4160-914d-f3c2dfd68baa"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.521342 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-inventory" (OuterVolumeSpecName: "inventory") pod "9dacc32b-acd1-4160-914d-f3c2dfd68baa" (UID: "9dacc32b-acd1-4160-914d-f3c2dfd68baa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.523121 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9dacc32b-acd1-4160-914d-f3c2dfd68baa" (UID: "9dacc32b-acd1-4160-914d-f3c2dfd68baa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.588053 4988 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.588082 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.588092 4988 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.588102 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9dacc32b-acd1-4160-914d-f3c2dfd68baa-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.588111 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmlnk\" (UniqueName: \"kubernetes.io/projected/9dacc32b-acd1-4160-914d-f3c2dfd68baa-kube-api-access-jmlnk\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.939943 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" event={"ID":"9dacc32b-acd1-4160-914d-f3c2dfd68baa","Type":"ContainerDied","Data":"39d4b673ebf9911419b482e34c8e41c42bb063736d8679a139773fbb752c3b1f"} Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.939983 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d4b673ebf9911419b482e34c8e41c42bb063736d8679a139773fbb752c3b1f" Nov 23 08:56:01 crc kubenswrapper[4988]: I1123 08:56:01.940048 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-8f6xc" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.063589 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-c8j5z"] Nov 23 08:56:02 crc kubenswrapper[4988]: E1123 08:56:02.063993 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901028e7-824c-49fe-8256-10f0576029e7" containerName="extract-utilities" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.064010 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="901028e7-824c-49fe-8256-10f0576029e7" containerName="extract-utilities" Nov 23 08:56:02 crc kubenswrapper[4988]: E1123 08:56:02.064032 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901028e7-824c-49fe-8256-10f0576029e7" containerName="registry-server" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.064038 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="901028e7-824c-49fe-8256-10f0576029e7" containerName="registry-server" Nov 23 08:56:02 crc kubenswrapper[4988]: E1123 08:56:02.064047 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901028e7-824c-49fe-8256-10f0576029e7" containerName="extract-content" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.064056 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="901028e7-824c-49fe-8256-10f0576029e7" containerName="extract-content" Nov 23 08:56:02 crc kubenswrapper[4988]: E1123 08:56:02.064069 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dacc32b-acd1-4160-914d-f3c2dfd68baa" containerName="libvirt-openstack-openstack-cell1" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.064076 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dacc32b-acd1-4160-914d-f3c2dfd68baa" containerName="libvirt-openstack-openstack-cell1" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.064286 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dacc32b-acd1-4160-914d-f3c2dfd68baa" containerName="libvirt-openstack-openstack-cell1" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.064305 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="901028e7-824c-49fe-8256-10f0576029e7" containerName="registry-server" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.064964 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.067302 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.067824 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.068070 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.068227 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.068920 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.069965 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.070400 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.095236 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-c8j5z"] Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.102825 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.102903 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.102964 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.103054 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-inventory\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.103122 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-1\") pod 
\"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.103163 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.103239 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.103424 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2scks\" (UniqueName: \"kubernetes.io/projected/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-kube-api-access-2scks\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.103481 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.205109 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.205180 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.205269 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.205595 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2scks\" (UniqueName: 
\"kubernetes.io/projected/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-kube-api-access-2scks\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.205645 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.205759 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.205792 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.205849 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.205898 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-inventory\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.206681 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.211881 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.211998 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: 
\"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.212259 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-inventory\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.212505 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.223138 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.226154 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.226840 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.230187 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2scks\" (UniqueName: \"kubernetes.io/projected/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-kube-api-access-2scks\") pod \"nova-cell1-openstack-openstack-cell1-c8j5z\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") " pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.393534 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:56:02 crc kubenswrapper[4988]: I1123 08:56:02.965009 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-c8j5z"] Nov 23 08:56:03 crc kubenswrapper[4988]: I1123 08:56:03.967163 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" event={"ID":"c80e8bf1-ba39-4578-9aaf-500df71fe1a2","Type":"ContainerStarted","Data":"cedc8692c063c47a6d44f8b579ecc4621f255acb1639ae026a25b60bdb492153"} Nov 23 08:56:03 crc kubenswrapper[4988]: I1123 08:56:03.967762 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" event={"ID":"c80e8bf1-ba39-4578-9aaf-500df71fe1a2","Type":"ContainerStarted","Data":"82677134b0663308fce64e5fa58cccb42e673fb3e4fe2b6dd2aaf4058dc9046c"} Nov 23 08:56:03 crc kubenswrapper[4988]: I1123 08:56:03.983313 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" podStartSLOduration=1.494950754 podStartE2EDuration="1.983296538s" podCreationTimestamp="2025-11-23 08:56:02 +0000 UTC" firstStartedPulling="2025-11-23 08:56:02.978422086 +0000 UTC m=+7815.286934849" lastFinishedPulling="2025-11-23 08:56:03.46676787 +0000 UTC m=+7815.775280633" observedRunningTime="2025-11-23 08:56:03.982024637 +0000 UTC m=+7816.290537400" watchObservedRunningTime="2025-11-23 08:56:03.983296538 +0000 UTC m=+7816.291809301" Nov 23 08:57:24 crc kubenswrapper[4988]: I1123 08:57:24.949331 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cd92r"] Nov 23 08:57:24 crc kubenswrapper[4988]: I1123 08:57:24.952198 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:24 crc kubenswrapper[4988]: I1123 08:57:24.976322 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cd92r"] Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.047912 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-utilities\") pod \"redhat-operators-cd92r\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.048046 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d4rl\" (UniqueName: \"kubernetes.io/projected/d934f28d-bab3-4c5d-b081-93fb64e69db0-kube-api-access-2d4rl\") pod \"redhat-operators-cd92r\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.048161 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-catalog-content\") pod \"redhat-operators-cd92r\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.149597 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-utilities\") pod \"redhat-operators-cd92r\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.149703 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d4rl\" (UniqueName: \"kubernetes.io/projected/d934f28d-bab3-4c5d-b081-93fb64e69db0-kube-api-access-2d4rl\") pod \"redhat-operators-cd92r\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.149767 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-catalog-content\") pod \"redhat-operators-cd92r\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.150434 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-utilities\") pod \"redhat-operators-cd92r\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.150602 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-catalog-content\") pod \"redhat-operators-cd92r\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.178278 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2d4rl\" (UniqueName: \"kubernetes.io/projected/d934f28d-bab3-4c5d-b081-93fb64e69db0-kube-api-access-2d4rl\") pod \"redhat-operators-cd92r\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.298449 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.756994 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cd92r"] Nov 23 08:57:25 crc kubenswrapper[4988]: W1123 08:57:25.766184 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd934f28d_bab3_4c5d_b081_93fb64e69db0.slice/crio-71f2bc0b1d822fbb09202d9ed22f137e6ae009218d000976375e32d60d039b89 WatchSource:0}: Error finding container 71f2bc0b1d822fbb09202d9ed22f137e6ae009218d000976375e32d60d039b89: Status 404 returned error can't find the container with id 71f2bc0b1d822fbb09202d9ed22f137e6ae009218d000976375e32d60d039b89 Nov 23 08:57:25 crc kubenswrapper[4988]: I1123 08:57:25.876731 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd92r" event={"ID":"d934f28d-bab3-4c5d-b081-93fb64e69db0","Type":"ContainerStarted","Data":"71f2bc0b1d822fbb09202d9ed22f137e6ae009218d000976375e32d60d039b89"} Nov 23 08:57:26 crc kubenswrapper[4988]: I1123 08:57:26.887531 4988 generic.go:334] "Generic (PLEG): container finished" podID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerID="b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517" exitCode=0 Nov 23 08:57:26 crc kubenswrapper[4988]: I1123 08:57:26.887596 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd92r" event={"ID":"d934f28d-bab3-4c5d-b081-93fb64e69db0","Type":"ContainerDied","Data":"b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517"} Nov 23 08:57:27 crc kubenswrapper[4988]: I1123 08:57:27.904185 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd92r" event={"ID":"d934f28d-bab3-4c5d-b081-93fb64e69db0","Type":"ContainerStarted","Data":"a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3"} Nov 23 08:57:33 crc kubenswrapper[4988]: I1123 08:57:33.965168 4988 generic.go:334] "Generic (PLEG): container finished" podID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerID="a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3" exitCode=0 Nov 23 08:57:33 crc kubenswrapper[4988]: I1123 08:57:33.965302 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd92r" event={"ID":"d934f28d-bab3-4c5d-b081-93fb64e69db0","Type":"ContainerDied","Data":"a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3"} Nov 23 08:57:35 crc kubenswrapper[4988]: I1123 08:57:35.989973 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd92r" event={"ID":"d934f28d-bab3-4c5d-b081-93fb64e69db0","Type":"ContainerStarted","Data":"7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a"} Nov 23 08:57:36 crc kubenswrapper[4988]: I1123 08:57:36.021738 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cd92r" podStartSLOduration=4.060683288 podStartE2EDuration="12.02171791s" 
podCreationTimestamp="2025-11-23 08:57:24 +0000 UTC" firstStartedPulling="2025-11-23 08:57:26.889530746 +0000 UTC m=+7899.198043509" lastFinishedPulling="2025-11-23 08:57:34.850565338 +0000 UTC m=+7907.159078131" observedRunningTime="2025-11-23 08:57:36.009565911 +0000 UTC m=+7908.318078714" watchObservedRunningTime="2025-11-23 08:57:36.02171791 +0000 UTC m=+7908.330230683" Nov 23 08:57:45 crc kubenswrapper[4988]: I1123 08:57:45.298807 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:45 crc kubenswrapper[4988]: I1123 08:57:45.299467 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:46 crc kubenswrapper[4988]: I1123 08:57:46.352301 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cd92r" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerName="registry-server" probeResult="failure" output=< Nov 23 08:57:46 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 08:57:46 crc kubenswrapper[4988]: > Nov 23 08:57:55 crc kubenswrapper[4988]: I1123 08:57:55.361663 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:55 crc kubenswrapper[4988]: I1123 08:57:55.420390 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:56 crc kubenswrapper[4988]: I1123 08:57:56.153966 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cd92r"] Nov 23 08:57:57 crc kubenswrapper[4988]: I1123 08:57:57.234374 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cd92r" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerName="registry-server" containerID="cri-o://7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a" gracePeriod=2 Nov 23 08:57:57 crc kubenswrapper[4988]: I1123 08:57:57.834749 4988 util.go:48] "No ready sandbox for pod can be found. 
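The startup-probe failure above ("timeout: failed to connect service \":50051\" within 1s") is the catalog pod's gRPC health check against registry-server; the wording matches the grpc_health_probe tool that marketplace catalog images use for this check. A minimal Go equivalent of what that probe does (an approximation, not the probe binary itself):

package main

import (
    "context"
    "fmt"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

// check dials the catalog gRPC port and calls the standard health service,
// failing if nothing answers within 1s -- the same outcome reported above.
func check(addr string) error {
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()
    conn, err := grpc.DialContext(ctx, addr,
        grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
    if err != nil {
        return fmt.Errorf("timeout: failed to connect service %q within 1s: %w", addr, err)
    }
    defer conn.Close()
    resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
    if err != nil {
        return err
    }
    if resp.Status != healthpb.HealthCheckResponse_SERVING {
        return fmt.Errorf("service not serving: %s", resp.Status)
    }
    return nil
}

func main() {
    if err := check("127.0.0.1:50051"); err != nil {
        fmt.Println(err)
    }
}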
Need to start a new one" pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.016946 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-catalog-content\") pod \"d934f28d-bab3-4c5d-b081-93fb64e69db0\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.017060 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4rl\" (UniqueName: \"kubernetes.io/projected/d934f28d-bab3-4c5d-b081-93fb64e69db0-kube-api-access-2d4rl\") pod \"d934f28d-bab3-4c5d-b081-93fb64e69db0\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.017157 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-utilities\") pod \"d934f28d-bab3-4c5d-b081-93fb64e69db0\" (UID: \"d934f28d-bab3-4c5d-b081-93fb64e69db0\") " Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.018160 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-utilities" (OuterVolumeSpecName: "utilities") pod "d934f28d-bab3-4c5d-b081-93fb64e69db0" (UID: "d934f28d-bab3-4c5d-b081-93fb64e69db0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.024323 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d934f28d-bab3-4c5d-b081-93fb64e69db0-kube-api-access-2d4rl" (OuterVolumeSpecName: "kube-api-access-2d4rl") pod "d934f28d-bab3-4c5d-b081-93fb64e69db0" (UID: "d934f28d-bab3-4c5d-b081-93fb64e69db0"). InnerVolumeSpecName "kube-api-access-2d4rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.119541 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.119579 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4rl\" (UniqueName: \"kubernetes.io/projected/d934f28d-bab3-4c5d-b081-93fb64e69db0-kube-api-access-2d4rl\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.126111 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d934f28d-bab3-4c5d-b081-93fb64e69db0" (UID: "d934f28d-bab3-4c5d-b081-93fb64e69db0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.223067 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d934f28d-bab3-4c5d-b081-93fb64e69db0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.250558 4988 generic.go:334] "Generic (PLEG): container finished" podID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerID="7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a" exitCode=0 Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.250615 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd92r" event={"ID":"d934f28d-bab3-4c5d-b081-93fb64e69db0","Type":"ContainerDied","Data":"7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a"} Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.250645 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cd92r" event={"ID":"d934f28d-bab3-4c5d-b081-93fb64e69db0","Type":"ContainerDied","Data":"71f2bc0b1d822fbb09202d9ed22f137e6ae009218d000976375e32d60d039b89"} Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.250641 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cd92r" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.250663 4988 scope.go:117] "RemoveContainer" containerID="7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.279231 4988 scope.go:117] "RemoveContainer" containerID="a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.285664 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cd92r"] Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.297863 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cd92r"] Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.317172 4988 scope.go:117] "RemoveContainer" containerID="b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.361915 4988 scope.go:117] "RemoveContainer" containerID="7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a" Nov 23 08:57:58 crc kubenswrapper[4988]: E1123 08:57:58.371052 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a\": container with ID starting with 7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a not found: ID does not exist" containerID="7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.371102 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a"} err="failed to get container status \"7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a\": rpc error: code = NotFound desc = could not find container \"7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a\": container with ID starting with 7a620d2352190ac3e994c236c4466cdb8bf123f66edeea12c9db6d054773d20a not found: ID does not exist" Nov 23 08:57:58 crc 
kubenswrapper[4988]: I1123 08:57:58.371132 4988 scope.go:117] "RemoveContainer" containerID="a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3" Nov 23 08:57:58 crc kubenswrapper[4988]: E1123 08:57:58.371606 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3\": container with ID starting with a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3 not found: ID does not exist" containerID="a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.371648 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3"} err="failed to get container status \"a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3\": rpc error: code = NotFound desc = could not find container \"a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3\": container with ID starting with a9f1a61691fef00b76519d58b9fd22abd9e15842e78ea08d08760fad3ad53ab3 not found: ID does not exist" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.371682 4988 scope.go:117] "RemoveContainer" containerID="b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517" Nov 23 08:57:58 crc kubenswrapper[4988]: E1123 08:57:58.371973 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517\": container with ID starting with b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517 not found: ID does not exist" containerID="b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.371993 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517"} err="failed to get container status \"b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517\": rpc error: code = NotFound desc = could not find container \"b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517\": container with ID starting with b5d04de2a4474fa089a540f0fe226b4b6cb50944c6e44b8f2c541f71a5a6c517 not found: ID does not exist" Nov 23 08:57:58 crc kubenswrapper[4988]: I1123 08:57:58.508211 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" path="/var/lib/kubelet/pods/d934f28d-bab3-4c5d-b081-93fb64e69db0/volumes" Nov 23 08:58:21 crc kubenswrapper[4988]: I1123 08:58:21.672625 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:58:21 crc kubenswrapper[4988]: I1123 08:58:21.673462 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:58:51 crc kubenswrapper[4988]: I1123 08:58:51.672184 4988 patch_prober.go:28] interesting 
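The NotFound errors above are benign: RemoveContainer ran for containers whose pod sandbox had already been cleaned up, the CRI status lookup failed, and kubelet simply logged the result and moved on. The general pattern when driving a CRI runtime is to treat NotFound as success; a sketch against the cri-api client (not kubelet's exact code path):

package criutil

import (
    "context"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeContainer is a sketch of the idempotent-delete pattern behind the
// errors above: if the container is already gone, the runtime answers
// codes.NotFound and the deletion can be counted as done.
func removeContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
    _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
    if status.Code(err) == codes.NotFound {
        return nil // already removed: treat as success, as the log shows kubelet doing
    }
    return err
}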
Nov 23 08:58:51 crc kubenswrapper[4988]: I1123 08:58:51.672775 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:59:11 crc kubenswrapper[4988]: I1123 08:59:11.107687 4988 generic.go:334] "Generic (PLEG): container finished" podID="c80e8bf1-ba39-4578-9aaf-500df71fe1a2" containerID="cedc8692c063c47a6d44f8b579ecc4621f255acb1639ae026a25b60bdb492153" exitCode=0
Nov 23 08:59:11 crc kubenswrapper[4988]: I1123 08:59:11.107798 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" event={"ID":"c80e8bf1-ba39-4578-9aaf-500df71fe1a2","Type":"ContainerDied","Data":"cedc8692c063c47a6d44f8b579ecc4621f255acb1639ae026a25b60bdb492153"}
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.549131 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z"
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.570214 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2scks\" (UniqueName: \"kubernetes.io/projected/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-kube-api-access-2scks\") pod \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") "
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.575808 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-kube-api-access-2scks" (OuterVolumeSpecName: "kube-api-access-2scks") pod "c80e8bf1-ba39-4578-9aaf-500df71fe1a2" (UID: "c80e8bf1-ba39-4578-9aaf-500df71fe1a2"). InnerVolumeSpecName "kube-api-access-2scks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.671697 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-ssh-key\") pod \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") "
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.671820 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-0\") pod \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") "
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.671907 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-combined-ca-bundle\") pod \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") "
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.672007 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-1\") pod \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") "
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.672052 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-1\") pod \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") "
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.672087 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cells-global-config-0\") pod \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") "
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.672167 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-inventory\") pod \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") "
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.672461 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-0\") pod \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\" (UID: \"c80e8bf1-ba39-4578-9aaf-500df71fe1a2\") "
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.673019 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2scks\" (UniqueName: \"kubernetes.io/projected/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-kube-api-access-2scks\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.675529 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "c80e8bf1-ba39-4578-9aaf-500df71fe1a2" (UID: "c80e8bf1-ba39-4578-9aaf-500df71fe1a2"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.698848 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c80e8bf1-ba39-4578-9aaf-500df71fe1a2" (UID: "c80e8bf1-ba39-4578-9aaf-500df71fe1a2"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.702776 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "c80e8bf1-ba39-4578-9aaf-500df71fe1a2" (UID: "c80e8bf1-ba39-4578-9aaf-500df71fe1a2"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.705964 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c80e8bf1-ba39-4578-9aaf-500df71fe1a2" (UID: "c80e8bf1-ba39-4578-9aaf-500df71fe1a2"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.707528 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c80e8bf1-ba39-4578-9aaf-500df71fe1a2" (UID: "c80e8bf1-ba39-4578-9aaf-500df71fe1a2"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.707790 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c80e8bf1-ba39-4578-9aaf-500df71fe1a2" (UID: "c80e8bf1-ba39-4578-9aaf-500df71fe1a2"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.714698 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-inventory" (OuterVolumeSpecName: "inventory") pod "c80e8bf1-ba39-4578-9aaf-500df71fe1a2" (UID: "c80e8bf1-ba39-4578-9aaf-500df71fe1a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.729977 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c80e8bf1-ba39-4578-9aaf-500df71fe1a2" (UID: "c80e8bf1-ba39-4578-9aaf-500df71fe1a2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.775056 4988 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.775100 4988 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.775114 4988 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.775126 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.775139 4988 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.775152 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.775164 4988 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:12 crc kubenswrapper[4988]: I1123 08:59:12.775176 4988 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80e8bf1-ba39-4578-9aaf-500df71fe1a2-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.130100 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" event={"ID":"c80e8bf1-ba39-4578-9aaf-500df71fe1a2","Type":"ContainerDied","Data":"82677134b0663308fce64e5fa58cccb42e673fb3e4fe2b6dd2aaf4058dc9046c"}
Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.130136 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82677134b0663308fce64e5fa58cccb42e673fb3e4fe2b6dd2aaf4058dc9046c"
Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.130189 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z"
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-c8j5z" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.248712 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-67jsk"] Nov 23 08:59:13 crc kubenswrapper[4988]: E1123 08:59:13.249109 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80e8bf1-ba39-4578-9aaf-500df71fe1a2" containerName="nova-cell1-openstack-openstack-cell1" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.249123 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80e8bf1-ba39-4578-9aaf-500df71fe1a2" containerName="nova-cell1-openstack-openstack-cell1" Nov 23 08:59:13 crc kubenswrapper[4988]: E1123 08:59:13.249136 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerName="extract-utilities" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.249142 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerName="extract-utilities" Nov 23 08:59:13 crc kubenswrapper[4988]: E1123 08:59:13.249169 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerName="registry-server" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.249175 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerName="registry-server" Nov 23 08:59:13 crc kubenswrapper[4988]: E1123 08:59:13.249220 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerName="extract-content" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.249228 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerName="extract-content" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.249408 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80e8bf1-ba39-4578-9aaf-500df71fe1a2" containerName="nova-cell1-openstack-openstack-cell1" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.249430 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d934f28d-bab3-4c5d-b081-93fb64e69db0" containerName="registry-server" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.250111 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.252185 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.252424 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.252528 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.252598 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.252653 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.274439 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-67jsk"] Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.285495 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ssh-key\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.285547 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.285646 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kbk2\" (UniqueName: \"kubernetes.io/projected/34592fcb-7601-4940-a40a-3fc5de6c9d01-kube-api-access-4kbk2\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.285683 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.285704 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-inventory\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.286056 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" 
(UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.286103 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.388603 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ssh-key\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.388649 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.388710 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kbk2\" (UniqueName: \"kubernetes.io/projected/34592fcb-7601-4940-a40a-3fc5de6c9d01-kube-api-access-4kbk2\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.388736 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.388760 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-inventory\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.388845 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.388865 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.394107 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.394256 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ssh-key\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.394365 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-inventory\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.395002 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.397039 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.398585 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.404828 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kbk2\" (UniqueName: \"kubernetes.io/projected/34592fcb-7601-4940-a40a-3fc5de6c9d01-kube-api-access-4kbk2\") pod \"telemetry-openstack-openstack-cell1-67jsk\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:13 crc kubenswrapper[4988]: I1123 08:59:13.566960 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 08:59:14 crc kubenswrapper[4988]: I1123 08:59:14.197102 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-67jsk"] Nov 23 08:59:14 crc kubenswrapper[4988]: I1123 08:59:14.209573 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:59:15 crc kubenswrapper[4988]: I1123 08:59:15.151504 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-67jsk" event={"ID":"34592fcb-7601-4940-a40a-3fc5de6c9d01","Type":"ContainerStarted","Data":"8f03bc1bfc5f548981ddd406dfa12afb7d4ef6c7c670afcbc8d895a4ae22a1bf"} Nov 23 08:59:15 crc kubenswrapper[4988]: I1123 08:59:15.151740 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-67jsk" event={"ID":"34592fcb-7601-4940-a40a-3fc5de6c9d01","Type":"ContainerStarted","Data":"834257ab67e841631da52cbcfdf2a22d39937a369ffe66e1e3b64958c21033b6"} Nov 23 08:59:15 crc kubenswrapper[4988]: I1123 08:59:15.174840 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-67jsk" podStartSLOduration=1.6736438169999999 podStartE2EDuration="2.174818417s" podCreationTimestamp="2025-11-23 08:59:13 +0000 UTC" firstStartedPulling="2025-11-23 08:59:14.208927067 +0000 UTC m=+8006.517439820" lastFinishedPulling="2025-11-23 08:59:14.710101657 +0000 UTC m=+8007.018614420" observedRunningTime="2025-11-23 08:59:15.169388474 +0000 UTC m=+8007.477901237" watchObservedRunningTime="2025-11-23 08:59:15.174818417 +0000 UTC m=+8007.483331180" Nov 23 08:59:21 crc kubenswrapper[4988]: I1123 08:59:21.672451 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:59:21 crc kubenswrapper[4988]: I1123 08:59:21.673224 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:59:21 crc kubenswrapper[4988]: I1123 08:59:21.673281 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 08:59:21 crc kubenswrapper[4988]: I1123 08:59:21.674260 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:59:21 crc kubenswrapper[4988]: I1123 08:59:21.674333 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" gracePeriod=600 Nov 23 08:59:21 crc kubenswrapper[4988]: E1123 
08:59:21.808508 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:59:22 crc kubenswrapper[4988]: I1123 08:59:22.221650 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" exitCode=0 Nov 23 08:59:22 crc kubenswrapper[4988]: I1123 08:59:22.221692 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e"} Nov 23 08:59:22 crc kubenswrapper[4988]: I1123 08:59:22.221724 4988 scope.go:117] "RemoveContainer" containerID="3bd0b03093d4e5730e6152df32a6d625ce0d59bc9d4baa74c74cf577290966ed" Nov 23 08:59:22 crc kubenswrapper[4988]: I1123 08:59:22.222515 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 08:59:22 crc kubenswrapper[4988]: E1123 08:59:22.222792 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:59:34 crc kubenswrapper[4988]: I1123 08:59:34.496186 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 08:59:34 crc kubenswrapper[4988]: E1123 08:59:34.497377 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 08:59:47 crc kubenswrapper[4988]: I1123 08:59:47.497118 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 08:59:47 crc kubenswrapper[4988]: E1123 08:59:47.498338 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.151255 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx"] Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.153081 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.155737 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.157149 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tx7\" (UniqueName: \"kubernetes.io/projected/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-kube-api-access-p9tx7\") pod \"collect-profiles-29398140-bk7rx\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.157515 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-config-volume\") pod \"collect-profiles-29398140-bk7rx\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.157583 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-secret-volume\") pod \"collect-profiles-29398140-bk7rx\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.163885 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.169330 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx"] Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.259490 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9tx7\" (UniqueName: \"kubernetes.io/projected/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-kube-api-access-p9tx7\") pod \"collect-profiles-29398140-bk7rx\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.259652 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-config-volume\") pod \"collect-profiles-29398140-bk7rx\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.259684 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-secret-volume\") pod \"collect-profiles-29398140-bk7rx\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.260781 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-config-volume\") pod 
\"collect-profiles-29398140-bk7rx\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.272822 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-secret-volume\") pod \"collect-profiles-29398140-bk7rx\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.275682 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9tx7\" (UniqueName: \"kubernetes.io/projected/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-kube-api-access-p9tx7\") pod \"collect-profiles-29398140-bk7rx\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.480135 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.497139 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:00:00 crc kubenswrapper[4988]: E1123 09:00:00.497375 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:00:00 crc kubenswrapper[4988]: I1123 09:00:00.958919 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx"] Nov 23 09:00:01 crc kubenswrapper[4988]: I1123 09:00:01.647884 4988 generic.go:334] "Generic (PLEG): container finished" podID="0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1" containerID="366459b24a4ac66172a3dfcbc45cc51d976d699a67f47acaacdd1d8e830c4c93" exitCode=0 Nov 23 09:00:01 crc kubenswrapper[4988]: I1123 09:00:01.647977 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" event={"ID":"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1","Type":"ContainerDied","Data":"366459b24a4ac66172a3dfcbc45cc51d976d699a67f47acaacdd1d8e830c4c93"} Nov 23 09:00:01 crc kubenswrapper[4988]: I1123 09:00:01.648529 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" event={"ID":"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1","Type":"ContainerStarted","Data":"bd866822c628eadf25dce2cfe7bd96f05109c199d5a0a3608b3ef19d89a70879"} Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.029489 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.219637 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9tx7\" (UniqueName: \"kubernetes.io/projected/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-kube-api-access-p9tx7\") pod \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.220147 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-secret-volume\") pod \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.220317 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-config-volume\") pod \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\" (UID: \"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1\") " Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.221262 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-config-volume" (OuterVolumeSpecName: "config-volume") pod "0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1" (UID: "0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.226900 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-kube-api-access-p9tx7" (OuterVolumeSpecName: "kube-api-access-p9tx7") pod "0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1" (UID: "0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1"). InnerVolumeSpecName "kube-api-access-p9tx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.228262 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1" (UID: "0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.323289 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9tx7\" (UniqueName: \"kubernetes.io/projected/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-kube-api-access-p9tx7\") on node \"crc\" DevicePath \"\"" Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.323328 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.323338 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.670276 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" event={"ID":"0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1","Type":"ContainerDied","Data":"bd866822c628eadf25dce2cfe7bd96f05109c199d5a0a3608b3ef19d89a70879"} Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.670335 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd866822c628eadf25dce2cfe7bd96f05109c199d5a0a3608b3ef19d89a70879" Nov 23 09:00:03 crc kubenswrapper[4988]: I1123 09:00:03.670414 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-bk7rx" Nov 23 09:00:04 crc kubenswrapper[4988]: I1123 09:00:04.145380 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh"] Nov 23 09:00:04 crc kubenswrapper[4988]: I1123 09:00:04.156748 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-sbzjh"] Nov 23 09:00:04 crc kubenswrapper[4988]: I1123 09:00:04.514115 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="684c09c5-e69f-40d1-b23b-2f0b6df72025" path="/var/lib/kubelet/pods/684c09c5-e69f-40d1-b23b-2f0b6df72025/volumes" Nov 23 09:00:13 crc kubenswrapper[4988]: I1123 09:00:13.495916 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:00:13 crc kubenswrapper[4988]: E1123 09:00:13.496833 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:00:27 crc kubenswrapper[4988]: I1123 09:00:27.496895 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:00:27 crc kubenswrapper[4988]: E1123 09:00:27.498021 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:00:38 crc kubenswrapper[4988]: I1123 09:00:38.512593 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:00:38 crc kubenswrapper[4988]: E1123 09:00:38.514282 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:00:52 crc kubenswrapper[4988]: I1123 09:00:52.497886 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:00:52 crc kubenswrapper[4988]: E1123 09:00:52.498709 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.168241 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29398141-wbcdx"] Nov 23 09:01:00 crc kubenswrapper[4988]: E1123 09:01:00.169377 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1" containerName="collect-profiles" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.169398 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1" containerName="collect-profiles" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.169673 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0088e5b8-9a3f-40f6-a04c-49e2f43cfdf1" containerName="collect-profiles" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.170633 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.179521 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398141-wbcdx"] Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.263138 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-combined-ca-bundle\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.263525 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-config-data\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.263720 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-fernet-keys\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.263962 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbrkb\" (UniqueName: \"kubernetes.io/projected/146957be-9dc7-4f00-b343-f4d72b52ea64-kube-api-access-bbrkb\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.366325 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-combined-ca-bundle\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.366486 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-config-data\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.366579 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-fernet-keys\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.366708 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbrkb\" (UniqueName: \"kubernetes.io/projected/146957be-9dc7-4f00-b343-f4d72b52ea64-kube-api-access-bbrkb\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.372862 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-combined-ca-bundle\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.372928 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-config-data\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.373578 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-fernet-keys\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.395304 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbrkb\" (UniqueName: \"kubernetes.io/projected/146957be-9dc7-4f00-b343-f4d72b52ea64-kube-api-access-bbrkb\") pod \"keystone-cron-29398141-wbcdx\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.495990 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:00 crc kubenswrapper[4988]: I1123 09:01:00.947801 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398141-wbcdx"] Nov 23 09:01:01 crc kubenswrapper[4988]: I1123 09:01:01.400997 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-wbcdx" event={"ID":"146957be-9dc7-4f00-b343-f4d72b52ea64","Type":"ContainerStarted","Data":"7ecbfe6dbcd64e1f0dcd59e9441c7d7af0b2c2fe2e65968907509d9f6763aa9a"} Nov 23 09:01:01 crc kubenswrapper[4988]: I1123 09:01:01.401356 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-wbcdx" event={"ID":"146957be-9dc7-4f00-b343-f4d72b52ea64","Type":"ContainerStarted","Data":"3f264a97b4370541d16c2d6a5e6d028c4bb7daf8c864eab150424cfac0b97864"} Nov 23 09:01:01 crc kubenswrapper[4988]: I1123 09:01:01.428934 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29398141-wbcdx" podStartSLOduration=1.428912637 podStartE2EDuration="1.428912637s" podCreationTimestamp="2025-11-23 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:01:01.423221567 +0000 UTC m=+8113.731734340" watchObservedRunningTime="2025-11-23 09:01:01.428912637 +0000 UTC m=+8113.737425410" Nov 23 09:01:03 crc kubenswrapper[4988]: I1123 09:01:03.497719 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:01:03 crc kubenswrapper[4988]: E1123 09:01:03.498324 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:01:03 crc kubenswrapper[4988]: I1123 09:01:03.863518 4988 scope.go:117] "RemoveContainer" containerID="b762c123fbccf4628ab69fc893fade95909e980ce4a3881fcd7e2892321eeacc" Nov 23 09:01:04 crc kubenswrapper[4988]: I1123 09:01:04.433234 4988 generic.go:334] "Generic (PLEG): container finished" podID="146957be-9dc7-4f00-b343-f4d72b52ea64" containerID="7ecbfe6dbcd64e1f0dcd59e9441c7d7af0b2c2fe2e65968907509d9f6763aa9a" exitCode=0 Nov 23 09:01:04 crc kubenswrapper[4988]: I1123 09:01:04.433355 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-wbcdx" event={"ID":"146957be-9dc7-4f00-b343-f4d72b52ea64","Type":"ContainerDied","Data":"7ecbfe6dbcd64e1f0dcd59e9441c7d7af0b2c2fe2e65968907509d9f6763aa9a"} Nov 23 09:01:05 crc kubenswrapper[4988]: I1123 09:01:05.945616 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:05 crc kubenswrapper[4988]: I1123 09:01:05.987963 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbrkb\" (UniqueName: \"kubernetes.io/projected/146957be-9dc7-4f00-b343-f4d72b52ea64-kube-api-access-bbrkb\") pod \"146957be-9dc7-4f00-b343-f4d72b52ea64\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " Nov 23 09:01:05 crc kubenswrapper[4988]: I1123 09:01:05.988226 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-config-data\") pod \"146957be-9dc7-4f00-b343-f4d72b52ea64\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " Nov 23 09:01:05 crc kubenswrapper[4988]: I1123 09:01:05.988271 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-fernet-keys\") pod \"146957be-9dc7-4f00-b343-f4d72b52ea64\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " Nov 23 09:01:05 crc kubenswrapper[4988]: I1123 09:01:05.988338 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-combined-ca-bundle\") pod \"146957be-9dc7-4f00-b343-f4d72b52ea64\" (UID: \"146957be-9dc7-4f00-b343-f4d72b52ea64\") " Nov 23 09:01:05 crc kubenswrapper[4988]: I1123 09:01:05.993858 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146957be-9dc7-4f00-b343-f4d72b52ea64-kube-api-access-bbrkb" (OuterVolumeSpecName: "kube-api-access-bbrkb") pod "146957be-9dc7-4f00-b343-f4d72b52ea64" (UID: "146957be-9dc7-4f00-b343-f4d72b52ea64"). InnerVolumeSpecName "kube-api-access-bbrkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:01:05 crc kubenswrapper[4988]: I1123 09:01:05.995302 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "146957be-9dc7-4f00-b343-f4d72b52ea64" (UID: "146957be-9dc7-4f00-b343-f4d72b52ea64"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:01:06 crc kubenswrapper[4988]: I1123 09:01:06.023346 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "146957be-9dc7-4f00-b343-f4d72b52ea64" (UID: "146957be-9dc7-4f00-b343-f4d72b52ea64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:01:06 crc kubenswrapper[4988]: I1123 09:01:06.048510 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-config-data" (OuterVolumeSpecName: "config-data") pod "146957be-9dc7-4f00-b343-f4d72b52ea64" (UID: "146957be-9dc7-4f00-b343-f4d72b52ea64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:01:06 crc kubenswrapper[4988]: I1123 09:01:06.095692 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:06 crc kubenswrapper[4988]: I1123 09:01:06.095723 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbrkb\" (UniqueName: \"kubernetes.io/projected/146957be-9dc7-4f00-b343-f4d72b52ea64-kube-api-access-bbrkb\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:06 crc kubenswrapper[4988]: I1123 09:01:06.095736 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:06 crc kubenswrapper[4988]: I1123 09:01:06.095745 4988 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/146957be-9dc7-4f00-b343-f4d72b52ea64-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:06 crc kubenswrapper[4988]: I1123 09:01:06.461346 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-wbcdx" event={"ID":"146957be-9dc7-4f00-b343-f4d72b52ea64","Type":"ContainerDied","Data":"3f264a97b4370541d16c2d6a5e6d028c4bb7daf8c864eab150424cfac0b97864"} Nov 23 09:01:06 crc kubenswrapper[4988]: I1123 09:01:06.461693 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29398141-wbcdx" Nov 23 09:01:06 crc kubenswrapper[4988]: I1123 09:01:06.461717 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f264a97b4370541d16c2d6a5e6d028c4bb7daf8c864eab150424cfac0b97864" Nov 23 09:01:16 crc kubenswrapper[4988]: I1123 09:01:16.496315 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:01:16 crc kubenswrapper[4988]: E1123 09:01:16.497232 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:01:30 crc kubenswrapper[4988]: I1123 09:01:30.497722 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:01:30 crc kubenswrapper[4988]: E1123 09:01:30.498727 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:01:41 crc kubenswrapper[4988]: I1123 09:01:41.496590 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:01:41 crc kubenswrapper[4988]: E1123 09:01:41.497384 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:01:47 crc kubenswrapper[4988]: I1123 09:01:47.925255 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7g6n5"] Nov 23 09:01:47 crc kubenswrapper[4988]: E1123 09:01:47.930155 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146957be-9dc7-4f00-b343-f4d72b52ea64" containerName="keystone-cron" Nov 23 09:01:47 crc kubenswrapper[4988]: I1123 09:01:47.930380 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="146957be-9dc7-4f00-b343-f4d72b52ea64" containerName="keystone-cron" Nov 23 09:01:47 crc kubenswrapper[4988]: I1123 09:01:47.933140 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="146957be-9dc7-4f00-b343-f4d72b52ea64" containerName="keystone-cron" Nov 23 09:01:47 crc kubenswrapper[4988]: I1123 09:01:47.940355 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:47 crc kubenswrapper[4988]: I1123 09:01:47.948950 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7g6n5"] Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.070926 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-catalog-content\") pod \"certified-operators-7g6n5\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.070988 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7nkr\" (UniqueName: \"kubernetes.io/projected/4c4eca40-28b0-41e0-93ef-ca804856c8de-kube-api-access-k7nkr\") pod \"certified-operators-7g6n5\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.071063 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-utilities\") pod \"certified-operators-7g6n5\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.172872 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-catalog-content\") pod \"certified-operators-7g6n5\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.173171 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7nkr\" (UniqueName: \"kubernetes.io/projected/4c4eca40-28b0-41e0-93ef-ca804856c8de-kube-api-access-k7nkr\") pod \"certified-operators-7g6n5\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.173219 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-utilities\") pod \"certified-operators-7g6n5\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.173377 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-catalog-content\") pod \"certified-operators-7g6n5\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.173686 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-utilities\") pod \"certified-operators-7g6n5\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.194640 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k7nkr\" (UniqueName: \"kubernetes.io/projected/4c4eca40-28b0-41e0-93ef-ca804856c8de-kube-api-access-k7nkr\") pod \"certified-operators-7g6n5\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.287122 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.823009 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7g6n5"] Nov 23 09:01:48 crc kubenswrapper[4988]: I1123 09:01:48.900073 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g6n5" event={"ID":"4c4eca40-28b0-41e0-93ef-ca804856c8de","Type":"ContainerStarted","Data":"387f7306215b7b2a985fcb7a16f197e0bf5219c96c995c1a3039a13423ae46ec"} Nov 23 09:01:49 crc kubenswrapper[4988]: I1123 09:01:49.917336 4988 generic.go:334] "Generic (PLEG): container finished" podID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerID="795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba" exitCode=0 Nov 23 09:01:49 crc kubenswrapper[4988]: I1123 09:01:49.917424 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g6n5" event={"ID":"4c4eca40-28b0-41e0-93ef-ca804856c8de","Type":"ContainerDied","Data":"795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba"} Nov 23 09:01:50 crc kubenswrapper[4988]: I1123 09:01:50.933819 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g6n5" event={"ID":"4c4eca40-28b0-41e0-93ef-ca804856c8de","Type":"ContainerStarted","Data":"bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe"} Nov 23 09:01:52 crc kubenswrapper[4988]: I1123 09:01:52.965008 4988 generic.go:334] "Generic (PLEG): container finished" podID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerID="bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe" exitCode=0 Nov 23 09:01:52 crc kubenswrapper[4988]: I1123 09:01:52.965105 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g6n5" event={"ID":"4c4eca40-28b0-41e0-93ef-ca804856c8de","Type":"ContainerDied","Data":"bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe"} Nov 23 09:01:53 crc kubenswrapper[4988]: I1123 09:01:53.496545 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:01:53 crc kubenswrapper[4988]: E1123 09:01:53.497107 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:01:53 crc kubenswrapper[4988]: I1123 09:01:53.982155 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g6n5" event={"ID":"4c4eca40-28b0-41e0-93ef-ca804856c8de","Type":"ContainerStarted","Data":"d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1"} Nov 23 09:01:54 crc kubenswrapper[4988]: I1123 09:01:54.015021 4988 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7g6n5" podStartSLOduration=3.436978292 podStartE2EDuration="7.014994472s" podCreationTimestamp="2025-11-23 09:01:47 +0000 UTC" firstStartedPulling="2025-11-23 09:01:49.920274839 +0000 UTC m=+8162.228787642" lastFinishedPulling="2025-11-23 09:01:53.498291049 +0000 UTC m=+8165.806803822" observedRunningTime="2025-11-23 09:01:54.004688007 +0000 UTC m=+8166.313200810" watchObservedRunningTime="2025-11-23 09:01:54.014994472 +0000 UTC m=+8166.323507255" Nov 23 09:01:58 crc kubenswrapper[4988]: I1123 09:01:58.287883 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:58 crc kubenswrapper[4988]: I1123 09:01:58.288836 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:58 crc kubenswrapper[4988]: I1123 09:01:58.356253 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:59 crc kubenswrapper[4988]: I1123 09:01:59.099179 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:01:59 crc kubenswrapper[4988]: I1123 09:01:59.158224 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7g6n5"] Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.054400 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7g6n5" podUID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerName="registry-server" containerID="cri-o://d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1" gracePeriod=2 Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.563810 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.670671 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-catalog-content\") pod \"4c4eca40-28b0-41e0-93ef-ca804856c8de\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.670775 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-utilities\") pod \"4c4eca40-28b0-41e0-93ef-ca804856c8de\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.671027 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7nkr\" (UniqueName: \"kubernetes.io/projected/4c4eca40-28b0-41e0-93ef-ca804856c8de-kube-api-access-k7nkr\") pod \"4c4eca40-28b0-41e0-93ef-ca804856c8de\" (UID: \"4c4eca40-28b0-41e0-93ef-ca804856c8de\") " Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.673092 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-utilities" (OuterVolumeSpecName: "utilities") pod "4c4eca40-28b0-41e0-93ef-ca804856c8de" (UID: "4c4eca40-28b0-41e0-93ef-ca804856c8de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.676723 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c4eca40-28b0-41e0-93ef-ca804856c8de-kube-api-access-k7nkr" (OuterVolumeSpecName: "kube-api-access-k7nkr") pod "4c4eca40-28b0-41e0-93ef-ca804856c8de" (UID: "4c4eca40-28b0-41e0-93ef-ca804856c8de"). InnerVolumeSpecName "kube-api-access-k7nkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.773592 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7nkr\" (UniqueName: \"kubernetes.io/projected/4c4eca40-28b0-41e0-93ef-ca804856c8de-kube-api-access-k7nkr\") on node \"crc\" DevicePath \"\"" Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.773620 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.907601 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c4eca40-28b0-41e0-93ef-ca804856c8de" (UID: "4c4eca40-28b0-41e0-93ef-ca804856c8de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:02:01 crc kubenswrapper[4988]: I1123 09:02:01.978535 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4eca40-28b0-41e0-93ef-ca804856c8de-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.066396 4988 generic.go:334] "Generic (PLEG): container finished" podID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerID="d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1" exitCode=0 Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.066464 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7g6n5" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.066477 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g6n5" event={"ID":"4c4eca40-28b0-41e0-93ef-ca804856c8de","Type":"ContainerDied","Data":"d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1"} Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.066537 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7g6n5" event={"ID":"4c4eca40-28b0-41e0-93ef-ca804856c8de","Type":"ContainerDied","Data":"387f7306215b7b2a985fcb7a16f197e0bf5219c96c995c1a3039a13423ae46ec"} Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.066568 4988 scope.go:117] "RemoveContainer" containerID="d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.117362 4988 scope.go:117] "RemoveContainer" containerID="bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.119619 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7g6n5"] Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.132557 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7g6n5"] Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.159512 4988 scope.go:117] "RemoveContainer" containerID="795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.206025 4988 scope.go:117] "RemoveContainer" containerID="d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1" Nov 23 09:02:02 crc kubenswrapper[4988]: E1123 09:02:02.206665 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1\": container with ID starting with d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1 not found: ID does not exist" containerID="d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.206694 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1"} err="failed to get container status \"d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1\": rpc error: code = NotFound desc = could not find container \"d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1\": container with ID starting with d4af9e4aae54d3e41317e7f87ed23b21943bca798485e6b1cd13e643da9149c1 not found: ID does not exist" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.206736 4988 scope.go:117] "RemoveContainer" containerID="bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe" Nov 23 09:02:02 crc kubenswrapper[4988]: E1123 09:02:02.207337 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe\": container with ID starting with bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe not found: ID does not exist" containerID="bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.207379 4988 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe"} err="failed to get container status \"bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe\": rpc error: code = NotFound desc = could not find container \"bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe\": container with ID starting with bf42e3f16f1aa15f5cb34f54893f2a1b328406fda45e52d3b473e876a8ec6ebe not found: ID does not exist" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.207406 4988 scope.go:117] "RemoveContainer" containerID="795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba" Nov 23 09:02:02 crc kubenswrapper[4988]: E1123 09:02:02.207826 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba\": container with ID starting with 795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba not found: ID does not exist" containerID="795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.207849 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba"} err="failed to get container status \"795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba\": rpc error: code = NotFound desc = could not find container \"795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba\": container with ID starting with 795ac2e295b72b2fb7c62bc6efad15fe2586dd813a47ff56d6b09a3719d381ba not found: ID does not exist" Nov 23 09:02:02 crc kubenswrapper[4988]: I1123 09:02:02.516974 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c4eca40-28b0-41e0-93ef-ca804856c8de" path="/var/lib/kubelet/pods/4c4eca40-28b0-41e0-93ef-ca804856c8de/volumes" Nov 23 09:02:08 crc kubenswrapper[4988]: I1123 09:02:08.513774 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:02:08 crc kubenswrapper[4988]: E1123 09:02:08.514595 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:02:20 crc kubenswrapper[4988]: I1123 09:02:20.496986 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:02:20 crc kubenswrapper[4988]: E1123 09:02:20.498032 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:02:32 crc kubenswrapper[4988]: I1123 09:02:32.497516 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:02:32 crc 
kubenswrapper[4988]: E1123 09:02:32.498753 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.429364 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k9bmh"] Nov 23 09:02:39 crc kubenswrapper[4988]: E1123 09:02:39.431282 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerName="extract-utilities" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.431375 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerName="extract-utilities" Nov 23 09:02:39 crc kubenswrapper[4988]: E1123 09:02:39.431441 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerName="registry-server" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.431504 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerName="registry-server" Nov 23 09:02:39 crc kubenswrapper[4988]: E1123 09:02:39.431573 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerName="extract-content" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.431629 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerName="extract-content" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.433100 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c4eca40-28b0-41e0-93ef-ca804856c8de" containerName="registry-server" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.435485 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.445871 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9bmh"] Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.528152 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-utilities\") pod \"redhat-marketplace-k9bmh\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.528251 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-catalog-content\") pod \"redhat-marketplace-k9bmh\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.528326 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjvhr\" (UniqueName: \"kubernetes.io/projected/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-kube-api-access-xjvhr\") pod \"redhat-marketplace-k9bmh\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.630178 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-utilities\") pod \"redhat-marketplace-k9bmh\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.630329 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-catalog-content\") pod \"redhat-marketplace-k9bmh\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.630387 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjvhr\" (UniqueName: \"kubernetes.io/projected/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-kube-api-access-xjvhr\") pod \"redhat-marketplace-k9bmh\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.630768 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-utilities\") pod \"redhat-marketplace-k9bmh\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.630794 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-catalog-content\") pod \"redhat-marketplace-k9bmh\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.652560 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xjvhr\" (UniqueName: \"kubernetes.io/projected/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-kube-api-access-xjvhr\") pod \"redhat-marketplace-k9bmh\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:39 crc kubenswrapper[4988]: I1123 09:02:39.826246 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:40 crc kubenswrapper[4988]: I1123 09:02:40.268873 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9bmh"] Nov 23 09:02:40 crc kubenswrapper[4988]: I1123 09:02:40.548388 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9bmh" event={"ID":"6fcc058e-fbdc-4bc4-a71c-58157833dfd9","Type":"ContainerStarted","Data":"328b5120baa0ea1de4d2515a7b3381767d71d51105f01d0ebc8583a5df9ac7c1"} Nov 23 09:02:41 crc kubenswrapper[4988]: I1123 09:02:41.570746 4988 generic.go:334] "Generic (PLEG): container finished" podID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerID="84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c" exitCode=0 Nov 23 09:02:41 crc kubenswrapper[4988]: I1123 09:02:41.571107 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9bmh" event={"ID":"6fcc058e-fbdc-4bc4-a71c-58157833dfd9","Type":"ContainerDied","Data":"84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c"} Nov 23 09:02:42 crc kubenswrapper[4988]: I1123 09:02:42.585685 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9bmh" event={"ID":"6fcc058e-fbdc-4bc4-a71c-58157833dfd9","Type":"ContainerStarted","Data":"7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe"} Nov 23 09:02:43 crc kubenswrapper[4988]: I1123 09:02:43.595461 4988 generic.go:334] "Generic (PLEG): container finished" podID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerID="7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe" exitCode=0 Nov 23 09:02:43 crc kubenswrapper[4988]: I1123 09:02:43.595540 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9bmh" event={"ID":"6fcc058e-fbdc-4bc4-a71c-58157833dfd9","Type":"ContainerDied","Data":"7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe"} Nov 23 09:02:44 crc kubenswrapper[4988]: I1123 09:02:44.609981 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9bmh" event={"ID":"6fcc058e-fbdc-4bc4-a71c-58157833dfd9","Type":"ContainerStarted","Data":"f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807"} Nov 23 09:02:44 crc kubenswrapper[4988]: I1123 09:02:44.630150 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k9bmh" podStartSLOduration=2.892036123 podStartE2EDuration="5.630130879s" podCreationTimestamp="2025-11-23 09:02:39 +0000 UTC" firstStartedPulling="2025-11-23 09:02:41.572702438 +0000 UTC m=+8213.881215241" lastFinishedPulling="2025-11-23 09:02:44.310797234 +0000 UTC m=+8216.619309997" observedRunningTime="2025-11-23 09:02:44.627938175 +0000 UTC m=+8216.936450958" watchObservedRunningTime="2025-11-23 09:02:44.630130879 +0000 UTC m=+8216.938643642" Nov 23 09:02:47 crc kubenswrapper[4988]: I1123 09:02:47.496980 4988 scope.go:117] "RemoveContainer" 
containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:02:47 crc kubenswrapper[4988]: E1123 09:02:47.499520 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:02:49 crc kubenswrapper[4988]: I1123 09:02:49.826406 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:49 crc kubenswrapper[4988]: I1123 09:02:49.827994 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:49 crc kubenswrapper[4988]: I1123 09:02:49.887573 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:50 crc kubenswrapper[4988]: I1123 09:02:50.760921 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:50 crc kubenswrapper[4988]: I1123 09:02:50.812575 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9bmh"] Nov 23 09:02:52 crc kubenswrapper[4988]: I1123 09:02:52.705326 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k9bmh" podUID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerName="registry-server" containerID="cri-o://f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807" gracePeriod=2 Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.138117 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.214693 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-utilities\") pod \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.214871 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-catalog-content\") pod \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.214922 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjvhr\" (UniqueName: \"kubernetes.io/projected/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-kube-api-access-xjvhr\") pod \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\" (UID: \"6fcc058e-fbdc-4bc4-a71c-58157833dfd9\") " Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.216058 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-utilities" (OuterVolumeSpecName: "utilities") pod "6fcc058e-fbdc-4bc4-a71c-58157833dfd9" (UID: "6fcc058e-fbdc-4bc4-a71c-58157833dfd9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.228592 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-kube-api-access-xjvhr" (OuterVolumeSpecName: "kube-api-access-xjvhr") pod "6fcc058e-fbdc-4bc4-a71c-58157833dfd9" (UID: "6fcc058e-fbdc-4bc4-a71c-58157833dfd9"). InnerVolumeSpecName "kube-api-access-xjvhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.233141 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fcc058e-fbdc-4bc4-a71c-58157833dfd9" (UID: "6fcc058e-fbdc-4bc4-a71c-58157833dfd9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.317023 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.317279 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjvhr\" (UniqueName: \"kubernetes.io/projected/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-kube-api-access-xjvhr\") on node \"crc\" DevicePath \"\"" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.317414 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fcc058e-fbdc-4bc4-a71c-58157833dfd9-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.730429 4988 generic.go:334] "Generic (PLEG): container finished" podID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerID="f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807" exitCode=0 Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.730499 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9bmh" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.730505 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9bmh" event={"ID":"6fcc058e-fbdc-4bc4-a71c-58157833dfd9","Type":"ContainerDied","Data":"f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807"} Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.731083 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9bmh" event={"ID":"6fcc058e-fbdc-4bc4-a71c-58157833dfd9","Type":"ContainerDied","Data":"328b5120baa0ea1de4d2515a7b3381767d71d51105f01d0ebc8583a5df9ac7c1"} Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.731133 4988 scope.go:117] "RemoveContainer" containerID="f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.761638 4988 scope.go:117] "RemoveContainer" containerID="7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.784318 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9bmh"] Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.793632 4988 scope.go:117] "RemoveContainer" containerID="84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.795628 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9bmh"] Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.857396 4988 scope.go:117] "RemoveContainer" containerID="f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807" Nov 23 09:02:53 crc kubenswrapper[4988]: E1123 09:02:53.857769 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807\": container with ID starting with f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807 not found: ID does not exist" containerID="f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.857821 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807"} err="failed to get container status \"f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807\": rpc error: code = NotFound desc = could not find container \"f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807\": container with ID starting with f828588141394e5af0257326d90a321ce8126cdf5c24bc29f7db7eacdcf93807 not found: ID does not exist" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.857851 4988 scope.go:117] "RemoveContainer" containerID="7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe" Nov 23 09:02:53 crc kubenswrapper[4988]: E1123 09:02:53.858240 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe\": container with ID starting with 7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe not found: ID does not exist" containerID="7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.858270 4988 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe"} err="failed to get container status \"7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe\": rpc error: code = NotFound desc = could not find container \"7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe\": container with ID starting with 7f76afdc7e75c3aedd15396f070b501aa7a4c457c72ab956f1f89983bf778abe not found: ID does not exist" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.858369 4988 scope.go:117] "RemoveContainer" containerID="84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c" Nov 23 09:02:53 crc kubenswrapper[4988]: E1123 09:02:53.858680 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c\": container with ID starting with 84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c not found: ID does not exist" containerID="84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c" Nov 23 09:02:53 crc kubenswrapper[4988]: I1123 09:02:53.858705 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c"} err="failed to get container status \"84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c\": rpc error: code = NotFound desc = could not find container \"84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c\": container with ID starting with 84e8a1eecebe2f086c337045907d81f8c794784290d0291082612f7e9218ab0c not found: ID does not exist" Nov 23 09:02:54 crc kubenswrapper[4988]: I1123 09:02:54.515084 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" path="/var/lib/kubelet/pods/6fcc058e-fbdc-4bc4-a71c-58157833dfd9/volumes" Nov 23 09:03:02 crc kubenswrapper[4988]: I1123 09:03:02.496698 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:03:02 crc kubenswrapper[4988]: E1123 09:03:02.499658 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:03:15 crc kubenswrapper[4988]: I1123 09:03:15.496365 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:03:15 crc kubenswrapper[4988]: E1123 09:03:15.497272 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:03:28 crc kubenswrapper[4988]: I1123 09:03:28.503760 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:03:28 crc 
kubenswrapper[4988]: E1123 09:03:28.504450 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:03:31 crc kubenswrapper[4988]: I1123 09:03:31.153142 4988 generic.go:334] "Generic (PLEG): container finished" podID="34592fcb-7601-4940-a40a-3fc5de6c9d01" containerID="8f03bc1bfc5f548981ddd406dfa12afb7d4ef6c7c670afcbc8d895a4ae22a1bf" exitCode=0 Nov 23 09:03:31 crc kubenswrapper[4988]: I1123 09:03:31.153243 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-67jsk" event={"ID":"34592fcb-7601-4940-a40a-3fc5de6c9d01","Type":"ContainerDied","Data":"8f03bc1bfc5f548981ddd406dfa12afb7d4ef6c7c670afcbc8d895a4ae22a1bf"} Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.691012 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.692802 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-2\") pod \"34592fcb-7601-4940-a40a-3fc5de6c9d01\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.692866 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kbk2\" (UniqueName: \"kubernetes.io/projected/34592fcb-7601-4940-a40a-3fc5de6c9d01-kube-api-access-4kbk2\") pod \"34592fcb-7601-4940-a40a-3fc5de6c9d01\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.692900 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ssh-key\") pod \"34592fcb-7601-4940-a40a-3fc5de6c9d01\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.692933 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-0\") pod \"34592fcb-7601-4940-a40a-3fc5de6c9d01\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.692997 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-inventory\") pod \"34592fcb-7601-4940-a40a-3fc5de6c9d01\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.693016 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-telemetry-combined-ca-bundle\") pod \"34592fcb-7601-4940-a40a-3fc5de6c9d01\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.693059 4988 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-1\") pod \"34592fcb-7601-4940-a40a-3fc5de6c9d01\" (UID: \"34592fcb-7601-4940-a40a-3fc5de6c9d01\") " Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.698119 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34592fcb-7601-4940-a40a-3fc5de6c9d01-kube-api-access-4kbk2" (OuterVolumeSpecName: "kube-api-access-4kbk2") pod "34592fcb-7601-4940-a40a-3fc5de6c9d01" (UID: "34592fcb-7601-4940-a40a-3fc5de6c9d01"). InnerVolumeSpecName "kube-api-access-4kbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.698357 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "34592fcb-7601-4940-a40a-3fc5de6c9d01" (UID: "34592fcb-7601-4940-a40a-3fc5de6c9d01"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.723103 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "34592fcb-7601-4940-a40a-3fc5de6c9d01" (UID: "34592fcb-7601-4940-a40a-3fc5de6c9d01"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.730018 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "34592fcb-7601-4940-a40a-3fc5de6c9d01" (UID: "34592fcb-7601-4940-a40a-3fc5de6c9d01"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.732550 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "34592fcb-7601-4940-a40a-3fc5de6c9d01" (UID: "34592fcb-7601-4940-a40a-3fc5de6c9d01"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.741279 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "34592fcb-7601-4940-a40a-3fc5de6c9d01" (UID: "34592fcb-7601-4940-a40a-3fc5de6c9d01"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.748792 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-inventory" (OuterVolumeSpecName: "inventory") pod "34592fcb-7601-4940-a40a-3fc5de6c9d01" (UID: "34592fcb-7601-4940-a40a-3fc5de6c9d01"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.794921 4988 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.794961 4988 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.794979 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kbk2\" (UniqueName: \"kubernetes.io/projected/34592fcb-7601-4940-a40a-3fc5de6c9d01-kube-api-access-4kbk2\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.794991 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.795024 4988 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.795039 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:32 crc kubenswrapper[4988]: I1123 09:03:32.795051 4988 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34592fcb-7601-4940-a40a-3fc5de6c9d01-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.179898 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-67jsk" event={"ID":"34592fcb-7601-4940-a40a-3fc5de6c9d01","Type":"ContainerDied","Data":"834257ab67e841631da52cbcfdf2a22d39937a369ffe66e1e3b64958c21033b6"} Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.180278 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="834257ab67e841631da52cbcfdf2a22d39937a369ffe66e1e3b64958c21033b6" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.179982 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-67jsk" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.358884 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-gvl89"] Nov 23 09:03:33 crc kubenswrapper[4988]: E1123 09:03:33.359379 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerName="registry-server" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.359403 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerName="registry-server" Nov 23 09:03:33 crc kubenswrapper[4988]: E1123 09:03:33.359423 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerName="extract-content" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.359431 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerName="extract-content" Nov 23 09:03:33 crc kubenswrapper[4988]: E1123 09:03:33.359451 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34592fcb-7601-4940-a40a-3fc5de6c9d01" containerName="telemetry-openstack-openstack-cell1" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.359459 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="34592fcb-7601-4940-a40a-3fc5de6c9d01" containerName="telemetry-openstack-openstack-cell1" Nov 23 09:03:33 crc kubenswrapper[4988]: E1123 09:03:33.359483 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerName="extract-utilities" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.359492 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerName="extract-utilities" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.359747 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcc058e-fbdc-4bc4-a71c-58157833dfd9" containerName="registry-server" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.359781 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="34592fcb-7601-4940-a40a-3fc5de6c9d01" containerName="telemetry-openstack-openstack-cell1" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.360682 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.362740 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.363220 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.363531 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.363933 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.364108 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.371551 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-gvl89"] Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.406513 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.406597 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.406626 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8q7s\" (UniqueName: \"kubernetes.io/projected/9667739d-8f5f-4d13-8054-ed5d92987432-kube-api-access-b8q7s\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.406652 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.406871 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.509039 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.509088 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8q7s\" (UniqueName: \"kubernetes.io/projected/9667739d-8f5f-4d13-8054-ed5d92987432-kube-api-access-b8q7s\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.509128 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.509171 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.509289 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.513905 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.513911 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.520128 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.520715 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.527391 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8q7s\" (UniqueName: \"kubernetes.io/projected/9667739d-8f5f-4d13-8054-ed5d92987432-kube-api-access-b8q7s\") pod \"neutron-sriov-openstack-openstack-cell1-gvl89\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:33 crc kubenswrapper[4988]: I1123 09:03:33.682154 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:03:34 crc kubenswrapper[4988]: I1123 09:03:34.266399 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-gvl89"] Nov 23 09:03:35 crc kubenswrapper[4988]: I1123 09:03:35.210164 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" event={"ID":"9667739d-8f5f-4d13-8054-ed5d92987432","Type":"ContainerStarted","Data":"7ae640973510a57bf0334754b711174de26973ae56c9c2cbf21ec218300673ec"} Nov 23 09:03:36 crc kubenswrapper[4988]: I1123 09:03:36.221861 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" event={"ID":"9667739d-8f5f-4d13-8054-ed5d92987432","Type":"ContainerStarted","Data":"6ee3f03674ae0aaa85f6125e8a521b57fd891119b6c9f6d91518767432f68380"} Nov 23 09:03:36 crc kubenswrapper[4988]: I1123 09:03:36.243093 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" podStartSLOduration=2.397993282 podStartE2EDuration="3.243072273s" podCreationTimestamp="2025-11-23 09:03:33 +0000 UTC" firstStartedPulling="2025-11-23 09:03:34.263066982 +0000 UTC m=+8266.571579745" lastFinishedPulling="2025-11-23 09:03:35.108145973 +0000 UTC m=+8267.416658736" observedRunningTime="2025-11-23 09:03:36.238134411 +0000 UTC m=+8268.546647184" watchObservedRunningTime="2025-11-23 09:03:36.243072273 +0000 UTC m=+8268.551585036" Nov 23 09:03:42 crc kubenswrapper[4988]: I1123 09:03:42.497093 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:03:42 crc kubenswrapper[4988]: E1123 09:03:42.498080 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:03:53 crc kubenswrapper[4988]: I1123 09:03:53.497157 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:03:53 crc kubenswrapper[4988]: E1123 09:03:53.498457 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:04:06 crc kubenswrapper[4988]: I1123 09:04:06.495868 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:04:06 crc kubenswrapper[4988]: E1123 09:04:06.496669 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:04:20 crc kubenswrapper[4988]: I1123 09:04:20.496526 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:04:20 crc kubenswrapper[4988]: E1123 09:04:20.497365 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:04:32 crc kubenswrapper[4988]: I1123 09:04:32.496924 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:04:32 crc kubenswrapper[4988]: I1123 09:04:32.873866 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"e80484a673873fec9f4261c1283a0729952a52cb05e1b10f224bb6163ed3458e"} Nov 23 09:05:49 crc kubenswrapper[4988]: I1123 09:05:49.731346 4988 generic.go:334] "Generic (PLEG): container finished" podID="9667739d-8f5f-4d13-8054-ed5d92987432" containerID="6ee3f03674ae0aaa85f6125e8a521b57fd891119b6c9f6d91518767432f68380" exitCode=0 Nov 23 09:05:49 crc kubenswrapper[4988]: I1123 09:05:49.731468 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" event={"ID":"9667739d-8f5f-4d13-8054-ed5d92987432","Type":"ContainerDied","Data":"6ee3f03674ae0aaa85f6125e8a521b57fd891119b6c9f6d91518767432f68380"} Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.198812 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.300240 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-inventory\") pod \"9667739d-8f5f-4d13-8054-ed5d92987432\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.300406 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-combined-ca-bundle\") pod \"9667739d-8f5f-4d13-8054-ed5d92987432\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.300434 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-agent-neutron-config-0\") pod \"9667739d-8f5f-4d13-8054-ed5d92987432\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.300492 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-ssh-key\") pod \"9667739d-8f5f-4d13-8054-ed5d92987432\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.300538 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8q7s\" (UniqueName: \"kubernetes.io/projected/9667739d-8f5f-4d13-8054-ed5d92987432-kube-api-access-b8q7s\") pod \"9667739d-8f5f-4d13-8054-ed5d92987432\" (UID: \"9667739d-8f5f-4d13-8054-ed5d92987432\") " Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.305535 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "9667739d-8f5f-4d13-8054-ed5d92987432" (UID: "9667739d-8f5f-4d13-8054-ed5d92987432"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.306401 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9667739d-8f5f-4d13-8054-ed5d92987432-kube-api-access-b8q7s" (OuterVolumeSpecName: "kube-api-access-b8q7s") pod "9667739d-8f5f-4d13-8054-ed5d92987432" (UID: "9667739d-8f5f-4d13-8054-ed5d92987432"). InnerVolumeSpecName "kube-api-access-b8q7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.330548 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9667739d-8f5f-4d13-8054-ed5d92987432" (UID: "9667739d-8f5f-4d13-8054-ed5d92987432"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.332660 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "9667739d-8f5f-4d13-8054-ed5d92987432" (UID: "9667739d-8f5f-4d13-8054-ed5d92987432"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.335032 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-inventory" (OuterVolumeSpecName: "inventory") pod "9667739d-8f5f-4d13-8054-ed5d92987432" (UID: "9667739d-8f5f-4d13-8054-ed5d92987432"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.402250 4988 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.402427 4988 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.402495 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.402561 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8q7s\" (UniqueName: \"kubernetes.io/projected/9667739d-8f5f-4d13-8054-ed5d92987432-kube-api-access-b8q7s\") on node \"crc\" DevicePath \"\"" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.402616 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9667739d-8f5f-4d13-8054-ed5d92987432-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.758675 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" event={"ID":"9667739d-8f5f-4d13-8054-ed5d92987432","Type":"ContainerDied","Data":"7ae640973510a57bf0334754b711174de26973ae56c9c2cbf21ec218300673ec"} Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.758999 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae640973510a57bf0334754b711174de26973ae56c9c2cbf21ec218300673ec" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.758727 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-gvl89" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.848232 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w"] Nov 23 09:05:51 crc kubenswrapper[4988]: E1123 09:05:51.848731 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9667739d-8f5f-4d13-8054-ed5d92987432" containerName="neutron-sriov-openstack-openstack-cell1" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.848752 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9667739d-8f5f-4d13-8054-ed5d92987432" containerName="neutron-sriov-openstack-openstack-cell1" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.849018 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9667739d-8f5f-4d13-8054-ed5d92987432" containerName="neutron-sriov-openstack-openstack-cell1" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.849989 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.854531 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.854701 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.854735 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.855825 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.854739 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.868157 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w"] Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.920249 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.920579 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.920714 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw7b7\" (UniqueName: \"kubernetes.io/projected/890a5459-3557-40c1-a1fc-e44689e6525d-kube-api-access-kw7b7\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.920869 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:51 crc kubenswrapper[4988]: I1123 09:05:51.921012 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.022634 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.022723 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.022802 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.022885 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.022913 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw7b7\" (UniqueName: \"kubernetes.io/projected/890a5459-3557-40c1-a1fc-e44689e6525d-kube-api-access-kw7b7\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.027530 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.028122 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.036101 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.036120 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.048333 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw7b7\" (UniqueName: \"kubernetes.io/projected/890a5459-3557-40c1-a1fc-e44689e6525d-kube-api-access-kw7b7\") pod \"neutron-dhcp-openstack-openstack-cell1-nrd2w\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.177404 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.742914 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w"] Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.753807 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:05:52 crc kubenswrapper[4988]: I1123 09:05:52.769506 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" event={"ID":"890a5459-3557-40c1-a1fc-e44689e6525d","Type":"ContainerStarted","Data":"541236f5b2be4721381fb1d83e46747ed51e76cd58db704db221d4e276a54c4a"} Nov 23 09:05:53 crc kubenswrapper[4988]: I1123 09:05:53.782463 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" event={"ID":"890a5459-3557-40c1-a1fc-e44689e6525d","Type":"ContainerStarted","Data":"df00c2d9484377b5f5656e1a2f72bb17fb2a9c6a2c08f5a89002f934408ebd0a"} Nov 23 09:05:53 crc kubenswrapper[4988]: I1123 09:05:53.809879 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" podStartSLOduration=2.366995165 podStartE2EDuration="2.809859827s" podCreationTimestamp="2025-11-23 09:05:51 +0000 UTC" firstStartedPulling="2025-11-23 09:05:52.753616328 +0000 UTC m=+8405.062129091" lastFinishedPulling="2025-11-23 09:05:53.19648099 +0000 UTC m=+8405.504993753" observedRunningTime="2025-11-23 09:05:53.802493485 +0000 UTC m=+8406.111006248" watchObservedRunningTime="2025-11-23 09:05:53.809859827 +0000 UTC m=+8406.118372590" Nov 23 09:06:51 crc kubenswrapper[4988]: I1123 
09:06:51.672760 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:06:51 crc kubenswrapper[4988]: I1123 09:06:51.673507 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.699890 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n897z"] Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.705337 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.725807 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n897z"] Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.857936 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-catalog-content\") pod \"community-operators-n897z\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.858085 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-utilities\") pod \"community-operators-n897z\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.858148 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhjn\" (UniqueName: \"kubernetes.io/projected/25e808ba-74b3-40f8-aea3-7e8c389f92eb-kube-api-access-jhhjn\") pod \"community-operators-n897z\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.959919 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-utilities\") pod \"community-operators-n897z\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.959996 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhjn\" (UniqueName: \"kubernetes.io/projected/25e808ba-74b3-40f8-aea3-7e8c389f92eb-kube-api-access-jhhjn\") pod \"community-operators-n897z\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.960076 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-catalog-content\") pod 
\"community-operators-n897z\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.960555 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-catalog-content\") pod \"community-operators-n897z\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.960624 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-utilities\") pod \"community-operators-n897z\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:54 crc kubenswrapper[4988]: I1123 09:06:54.979049 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhhjn\" (UniqueName: \"kubernetes.io/projected/25e808ba-74b3-40f8-aea3-7e8c389f92eb-kube-api-access-jhhjn\") pod \"community-operators-n897z\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:55 crc kubenswrapper[4988]: I1123 09:06:55.025988 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n897z" Nov 23 09:06:55 crc kubenswrapper[4988]: I1123 09:06:55.521353 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n897z"] Nov 23 09:06:55 crc kubenswrapper[4988]: W1123 09:06:55.534648 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25e808ba_74b3_40f8_aea3_7e8c389f92eb.slice/crio-68d5e9a82168e40c185ef13c6862656a5f4bb0540fcd4ed128a7cf3935c81e04 WatchSource:0}: Error finding container 68d5e9a82168e40c185ef13c6862656a5f4bb0540fcd4ed128a7cf3935c81e04: Status 404 returned error can't find the container with id 68d5e9a82168e40c185ef13c6862656a5f4bb0540fcd4ed128a7cf3935c81e04 Nov 23 09:06:56 crc kubenswrapper[4988]: I1123 09:06:56.541080 4988 generic.go:334] "Generic (PLEG): container finished" podID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerID="d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f" exitCode=0 Nov 23 09:06:56 crc kubenswrapper[4988]: I1123 09:06:56.541224 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n897z" event={"ID":"25e808ba-74b3-40f8-aea3-7e8c389f92eb","Type":"ContainerDied","Data":"d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f"} Nov 23 09:06:56 crc kubenswrapper[4988]: I1123 09:06:56.541402 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n897z" event={"ID":"25e808ba-74b3-40f8-aea3-7e8c389f92eb","Type":"ContainerStarted","Data":"68d5e9a82168e40c185ef13c6862656a5f4bb0540fcd4ed128a7cf3935c81e04"} Nov 23 09:06:57 crc kubenswrapper[4988]: I1123 09:06:57.559921 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n897z" event={"ID":"25e808ba-74b3-40f8-aea3-7e8c389f92eb","Type":"ContainerStarted","Data":"bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a"} Nov 23 09:06:59 crc kubenswrapper[4988]: I1123 09:06:59.579859 4988 generic.go:334] "Generic (PLEG): container finished" 
podID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerID="bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a" exitCode=0 Nov 23 09:06:59 crc kubenswrapper[4988]: I1123 09:06:59.579929 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n897z" event={"ID":"25e808ba-74b3-40f8-aea3-7e8c389f92eb","Type":"ContainerDied","Data":"bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a"} Nov 23 09:07:00 crc kubenswrapper[4988]: I1123 09:07:00.591638 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n897z" event={"ID":"25e808ba-74b3-40f8-aea3-7e8c389f92eb","Type":"ContainerStarted","Data":"b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d"} Nov 23 09:07:00 crc kubenswrapper[4988]: I1123 09:07:00.620439 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n897z" podStartSLOduration=3.090868726 podStartE2EDuration="6.620417991s" podCreationTimestamp="2025-11-23 09:06:54 +0000 UTC" firstStartedPulling="2025-11-23 09:06:56.543897598 +0000 UTC m=+8468.852410351" lastFinishedPulling="2025-11-23 09:07:00.073446843 +0000 UTC m=+8472.381959616" observedRunningTime="2025-11-23 09:07:00.60940654 +0000 UTC m=+8472.917919343" watchObservedRunningTime="2025-11-23 09:07:00.620417991 +0000 UTC m=+8472.928930774" Nov 23 09:07:05 crc kubenswrapper[4988]: I1123 09:07:05.026549 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n897z" Nov 23 09:07:05 crc kubenswrapper[4988]: I1123 09:07:05.027276 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n897z" Nov 23 09:07:06 crc kubenswrapper[4988]: I1123 09:07:06.078637 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n897z" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerName="registry-server" probeResult="failure" output=< Nov 23 09:07:06 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 09:07:06 crc kubenswrapper[4988]: > Nov 23 09:07:15 crc kubenswrapper[4988]: I1123 09:07:15.088241 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n897z" Nov 23 09:07:15 crc kubenswrapper[4988]: I1123 09:07:15.159142 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n897z" Nov 23 09:07:15 crc kubenswrapper[4988]: I1123 09:07:15.331521 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n897z"] Nov 23 09:07:16 crc kubenswrapper[4988]: I1123 09:07:16.771163 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n897z" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerName="registry-server" containerID="cri-o://b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d" gracePeriod=2 Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.273476 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n897z" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.341547 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-catalog-content\") pod \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.341754 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhhjn\" (UniqueName: \"kubernetes.io/projected/25e808ba-74b3-40f8-aea3-7e8c389f92eb-kube-api-access-jhhjn\") pod \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.341874 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-utilities\") pod \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\" (UID: \"25e808ba-74b3-40f8-aea3-7e8c389f92eb\") " Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.342839 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-utilities" (OuterVolumeSpecName: "utilities") pod "25e808ba-74b3-40f8-aea3-7e8c389f92eb" (UID: "25e808ba-74b3-40f8-aea3-7e8c389f92eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.354676 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e808ba-74b3-40f8-aea3-7e8c389f92eb-kube-api-access-jhhjn" (OuterVolumeSpecName: "kube-api-access-jhhjn") pod "25e808ba-74b3-40f8-aea3-7e8c389f92eb" (UID: "25e808ba-74b3-40f8-aea3-7e8c389f92eb"). InnerVolumeSpecName "kube-api-access-jhhjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.386854 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25e808ba-74b3-40f8-aea3-7e8c389f92eb" (UID: "25e808ba-74b3-40f8-aea3-7e8c389f92eb"). InnerVolumeSpecName "catalog-content". 
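"Killing container with a grace period ... gracePeriod=2" above follows the standard termination contract: deliver SIGTERM, wait up to the grace period, then SIGKILL. The kubelet delegates this to CRI-O, but the shape of it is easy to show locally (illustrative only, using a throwaway child process):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to the grace period, and falls
// back to SIGKILL, matching the gracePeriod semantics in the log.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL once the grace period elapses
		<-done
		fmt.Println("force-killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	_ = cmd.Start()
	stopWithGrace(cmd, 2*time.Second) // gracePeriod=2, as for registry-server above
}
```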
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.444339 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.444382 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhhjn\" (UniqueName: \"kubernetes.io/projected/25e808ba-74b3-40f8-aea3-7e8c389f92eb-kube-api-access-jhhjn\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.444398 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25e808ba-74b3-40f8-aea3-7e8c389f92eb-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.782426 4988 generic.go:334] "Generic (PLEG): container finished" podID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerID="b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d" exitCode=0 Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.782470 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n897z" event={"ID":"25e808ba-74b3-40f8-aea3-7e8c389f92eb","Type":"ContainerDied","Data":"b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d"} Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.782482 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n897z" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.782500 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n897z" event={"ID":"25e808ba-74b3-40f8-aea3-7e8c389f92eb","Type":"ContainerDied","Data":"68d5e9a82168e40c185ef13c6862656a5f4bb0540fcd4ed128a7cf3935c81e04"} Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.782521 4988 scope.go:117] "RemoveContainer" containerID="b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.803137 4988 scope.go:117] "RemoveContainer" containerID="bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.820524 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n897z"] Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.831246 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n897z"] Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.842709 4988 scope.go:117] "RemoveContainer" containerID="d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.877041 4988 scope.go:117] "RemoveContainer" containerID="b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d" Nov 23 09:07:17 crc kubenswrapper[4988]: E1123 09:07:17.877457 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d\": container with ID starting with b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d not found: ID does not exist" containerID="b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.877486 
4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d"} err="failed to get container status \"b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d\": rpc error: code = NotFound desc = could not find container \"b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d\": container with ID starting with b0b1509018bb68fb97539358c2400a416d47b03f5d2556ae5b856faf98ed4a4d not found: ID does not exist" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.877509 4988 scope.go:117] "RemoveContainer" containerID="bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a" Nov 23 09:07:17 crc kubenswrapper[4988]: E1123 09:07:17.877906 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a\": container with ID starting with bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a not found: ID does not exist" containerID="bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.877951 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a"} err="failed to get container status \"bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a\": rpc error: code = NotFound desc = could not find container \"bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a\": container with ID starting with bb436d44a32aa8a3bca4a5ddedcd6b80d26595d0f838ec65718dd7c3c0e4967a not found: ID does not exist" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.877989 4988 scope.go:117] "RemoveContainer" containerID="d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f" Nov 23 09:07:17 crc kubenswrapper[4988]: E1123 09:07:17.878286 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f\": container with ID starting with d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f not found: ID does not exist" containerID="d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f" Nov 23 09:07:17 crc kubenswrapper[4988]: I1123 09:07:17.878309 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f"} err="failed to get container status \"d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f\": rpc error: code = NotFound desc = could not find container \"d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f\": container with ID starting with d9ffa4510608659774f8cf3afaa1855706db4a8c6ac4dcc985aeccdfdf75fc8f not found: ID does not exist" Nov 23 09:07:18 crc kubenswrapper[4988]: I1123 09:07:18.509587 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" path="/var/lib/kubelet/pods/25e808ba-74b3-40f8-aea3-7e8c389f92eb/volumes" Nov 23 09:07:21 crc kubenswrapper[4988]: I1123 09:07:21.672703 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:07:21 crc kubenswrapper[4988]: I1123 09:07:21.673032 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:07:51 crc kubenswrapper[4988]: I1123 09:07:51.672134 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:07:51 crc kubenswrapper[4988]: I1123 09:07:51.673880 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:07:51 crc kubenswrapper[4988]: I1123 09:07:51.674139 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 09:07:51 crc kubenswrapper[4988]: I1123 09:07:51.675300 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e80484a673873fec9f4261c1283a0729952a52cb05e1b10f224bb6163ed3458e"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:07:51 crc kubenswrapper[4988]: I1123 09:07:51.675544 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://e80484a673873fec9f4261c1283a0729952a52cb05e1b10f224bb6163ed3458e" gracePeriod=600 Nov 23 09:07:52 crc kubenswrapper[4988]: I1123 09:07:52.207655 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="e80484a673873fec9f4261c1283a0729952a52cb05e1b10f224bb6163ed3458e" exitCode=0 Nov 23 09:07:52 crc kubenswrapper[4988]: I1123 09:07:52.207731 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"e80484a673873fec9f4261c1283a0729952a52cb05e1b10f224bb6163ed3458e"} Nov 23 09:07:52 crc kubenswrapper[4988]: I1123 09:07:52.208056 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"} Nov 23 09:07:52 crc kubenswrapper[4988]: I1123 09:07:52.208088 4988 scope.go:117] "RemoveContainer" containerID="129b465065e09b436763796c4b4544336cde4e8d9f74c10a3e0fbecc92ee663e" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.684138 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gc89r"] Nov 23 09:09:02 crc kubenswrapper[4988]: 
E1123 09:09:02.685619 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerName="extract-utilities" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.685634 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerName="extract-utilities" Nov 23 09:09:02 crc kubenswrapper[4988]: E1123 09:09:02.685666 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerName="registry-server" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.685672 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerName="registry-server" Nov 23 09:09:02 crc kubenswrapper[4988]: E1123 09:09:02.685704 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerName="extract-content" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.685711 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerName="extract-content" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.685920 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e808ba-74b3-40f8-aea3-7e8c389f92eb" containerName="registry-server" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.689893 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.698595 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gc89r"] Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.768395 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-utilities\") pod \"redhat-operators-gc89r\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.768466 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-catalog-content\") pod \"redhat-operators-gc89r\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.768531 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdqz\" (UniqueName: \"kubernetes.io/projected/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-kube-api-access-vmdqz\") pod \"redhat-operators-gc89r\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.870130 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-utilities\") pod \"redhat-operators-gc89r\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.870222 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-catalog-content\") pod \"redhat-operators-gc89r\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.870287 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmdqz\" (UniqueName: \"kubernetes.io/projected/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-kube-api-access-vmdqz\") pod \"redhat-operators-gc89r\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.870789 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-catalog-content\") pod \"redhat-operators-gc89r\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.870798 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-utilities\") pod \"redhat-operators-gc89r\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:02 crc kubenswrapper[4988]: I1123 09:09:02.903081 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmdqz\" (UniqueName: \"kubernetes.io/projected/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-kube-api-access-vmdqz\") pod \"redhat-operators-gc89r\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:03 crc kubenswrapper[4988]: I1123 09:09:03.032673 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:03 crc kubenswrapper[4988]: I1123 09:09:03.481119 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gc89r"] Nov 23 09:09:04 crc kubenswrapper[4988]: I1123 09:09:04.037635 4988 generic.go:334] "Generic (PLEG): container finished" podID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerID="c27332734a445bdc5445d41a43501e32a416376a19d8172d79b0d77473c4018b" exitCode=0 Nov 23 09:09:04 crc kubenswrapper[4988]: I1123 09:09:04.037701 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc89r" event={"ID":"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1","Type":"ContainerDied","Data":"c27332734a445bdc5445d41a43501e32a416376a19d8172d79b0d77473c4018b"} Nov 23 09:09:04 crc kubenswrapper[4988]: I1123 09:09:04.037730 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc89r" event={"ID":"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1","Type":"ContainerStarted","Data":"0a590ee38903a1d4804206317ba9bfd600572fd99855b0087891f4f07319f035"} Nov 23 09:09:05 crc kubenswrapper[4988]: I1123 09:09:05.051780 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc89r" event={"ID":"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1","Type":"ContainerStarted","Data":"3f8c9148e3104a375a2144582bf060aca4ebb4a9a8ab8bc75f61652c7c559c30"} Nov 23 09:09:10 crc kubenswrapper[4988]: I1123 09:09:10.110808 4988 generic.go:334] "Generic (PLEG): container finished" podID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerID="3f8c9148e3104a375a2144582bf060aca4ebb4a9a8ab8bc75f61652c7c559c30" exitCode=0 Nov 23 09:09:10 crc kubenswrapper[4988]: I1123 09:09:10.111003 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc89r" event={"ID":"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1","Type":"ContainerDied","Data":"3f8c9148e3104a375a2144582bf060aca4ebb4a9a8ab8bc75f61652c7c559c30"} Nov 23 09:09:11 crc kubenswrapper[4988]: I1123 09:09:11.124786 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc89r" event={"ID":"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1","Type":"ContainerStarted","Data":"b02718e3d30570ecbb0c493254ddf68dae45b51832aa5790f3071f98edcb3bec"} Nov 23 09:09:11 crc kubenswrapper[4988]: I1123 09:09:11.147761 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gc89r" podStartSLOduration=2.6467566160000002 podStartE2EDuration="9.147739782s" podCreationTimestamp="2025-11-23 09:09:02 +0000 UTC" firstStartedPulling="2025-11-23 09:09:04.039740016 +0000 UTC m=+8596.348252779" lastFinishedPulling="2025-11-23 09:09:10.540723172 +0000 UTC m=+8602.849235945" observedRunningTime="2025-11-23 09:09:11.141244352 +0000 UTC m=+8603.449757135" watchObservedRunningTime="2025-11-23 09:09:11.147739782 +0000 UTC m=+8603.456252555" Nov 23 09:09:13 crc kubenswrapper[4988]: I1123 09:09:13.034018 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:13 crc kubenswrapper[4988]: I1123 09:09:13.034404 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:14 crc kubenswrapper[4988]: I1123 09:09:14.092947 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gc89r" 
podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="registry-server" probeResult="failure" output=< Nov 23 09:09:14 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 09:09:14 crc kubenswrapper[4988]: > Nov 23 09:09:24 crc kubenswrapper[4988]: I1123 09:09:24.100093 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gc89r" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="registry-server" probeResult="failure" output=< Nov 23 09:09:24 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 09:09:24 crc kubenswrapper[4988]: > Nov 23 09:09:25 crc kubenswrapper[4988]: I1123 09:09:25.262962 4988 generic.go:334] "Generic (PLEG): container finished" podID="890a5459-3557-40c1-a1fc-e44689e6525d" containerID="df00c2d9484377b5f5656e1a2f72bb17fb2a9c6a2c08f5a89002f934408ebd0a" exitCode=0 Nov 23 09:09:25 crc kubenswrapper[4988]: I1123 09:09:25.263016 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" event={"ID":"890a5459-3557-40c1-a1fc-e44689e6525d","Type":"ContainerDied","Data":"df00c2d9484377b5f5656e1a2f72bb17fb2a9c6a2c08f5a89002f934408ebd0a"} Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.755264 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.909307 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw7b7\" (UniqueName: \"kubernetes.io/projected/890a5459-3557-40c1-a1fc-e44689e6525d-kube-api-access-kw7b7\") pod \"890a5459-3557-40c1-a1fc-e44689e6525d\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.909556 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-ssh-key\") pod \"890a5459-3557-40c1-a1fc-e44689e6525d\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.910825 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-inventory\") pod \"890a5459-3557-40c1-a1fc-e44689e6525d\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.910894 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-agent-neutron-config-0\") pod \"890a5459-3557-40c1-a1fc-e44689e6525d\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.910946 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-combined-ca-bundle\") pod \"890a5459-3557-40c1-a1fc-e44689e6525d\" (UID: \"890a5459-3557-40c1-a1fc-e44689e6525d\") " Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.921056 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: 
"neutron-dhcp-combined-ca-bundle") pod "890a5459-3557-40c1-a1fc-e44689e6525d" (UID: "890a5459-3557-40c1-a1fc-e44689e6525d"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.921270 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/890a5459-3557-40c1-a1fc-e44689e6525d-kube-api-access-kw7b7" (OuterVolumeSpecName: "kube-api-access-kw7b7") pod "890a5459-3557-40c1-a1fc-e44689e6525d" (UID: "890a5459-3557-40c1-a1fc-e44689e6525d"). InnerVolumeSpecName "kube-api-access-kw7b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.946336 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "890a5459-3557-40c1-a1fc-e44689e6525d" (UID: "890a5459-3557-40c1-a1fc-e44689e6525d"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.947110 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "890a5459-3557-40c1-a1fc-e44689e6525d" (UID: "890a5459-3557-40c1-a1fc-e44689e6525d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:26 crc kubenswrapper[4988]: I1123 09:09:26.950063 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-inventory" (OuterVolumeSpecName: "inventory") pod "890a5459-3557-40c1-a1fc-e44689e6525d" (UID: "890a5459-3557-40c1-a1fc-e44689e6525d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:27 crc kubenswrapper[4988]: I1123 09:09:27.014155 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:27 crc kubenswrapper[4988]: I1123 09:09:27.014187 4988 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:27 crc kubenswrapper[4988]: I1123 09:09:27.014211 4988 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:27 crc kubenswrapper[4988]: I1123 09:09:27.014222 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw7b7\" (UniqueName: \"kubernetes.io/projected/890a5459-3557-40c1-a1fc-e44689e6525d-kube-api-access-kw7b7\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:27 crc kubenswrapper[4988]: I1123 09:09:27.014231 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/890a5459-3557-40c1-a1fc-e44689e6525d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:27 crc kubenswrapper[4988]: I1123 09:09:27.283949 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" event={"ID":"890a5459-3557-40c1-a1fc-e44689e6525d","Type":"ContainerDied","Data":"541236f5b2be4721381fb1d83e46747ed51e76cd58db704db221d4e276a54c4a"} Nov 23 09:09:27 crc kubenswrapper[4988]: I1123 09:09:27.284007 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541236f5b2be4721381fb1d83e46747ed51e76cd58db704db221d4e276a54c4a" Nov 23 09:09:27 crc kubenswrapper[4988]: I1123 09:09:27.284025 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-nrd2w" Nov 23 09:09:34 crc kubenswrapper[4988]: I1123 09:09:34.080115 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gc89r" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="registry-server" probeResult="failure" output=< Nov 23 09:09:34 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 09:09:34 crc kubenswrapper[4988]: > Nov 23 09:09:40 crc kubenswrapper[4988]: I1123 09:09:40.552022 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:09:40 crc kubenswrapper[4988]: I1123 09:09:40.552824 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="c062963f-c5e3-441c-b6a0-76fd001da005" containerName="nova-cell0-conductor-conductor" containerID="cri-o://b8b335d21268545dbc86ee407e743c702acf840a5177837a1b68585c6ee8203d" gracePeriod=30 Nov 23 09:09:40 crc kubenswrapper[4988]: I1123 09:09:40.579734 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:09:40 crc kubenswrapper[4988]: I1123 09:09:40.579983 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="d3f67971-3923-4879-834e-66c6946e1b96" containerName="nova-cell1-conductor-conductor" containerID="cri-o://deb82dab5b7054b5f9ee8d945f16bbea50ecd99b6a57bdfcf01cc339e8299035" gracePeriod=30 Nov 23 09:09:40 crc kubenswrapper[4988]: E1123 09:09:40.614789 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b8b335d21268545dbc86ee407e743c702acf840a5177837a1b68585c6ee8203d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 09:09:40 crc kubenswrapper[4988]: E1123 09:09:40.616240 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b8b335d21268545dbc86ee407e743c702acf840a5177837a1b68585c6ee8203d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 09:09:40 crc kubenswrapper[4988]: E1123 09:09:40.617962 4988 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b8b335d21268545dbc86ee407e743c702acf840a5177837a1b68585c6ee8203d" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 09:09:40 crc kubenswrapper[4988]: E1123 09:09:40.618005 4988 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="c062963f-c5e3-441c-b6a0-76fd001da005" containerName="nova-cell0-conductor-conductor" Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.289253 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.289695 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-log" 
containerID="cri-o://7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc" gracePeriod=30 Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.290030 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-api" containerID="cri-o://bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8" gracePeriod=30 Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.381002 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.381251 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="836ddb4b-a468-45d0-9f73-48fcea741926" containerName="nova-scheduler-scheduler" containerID="cri-o://b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b" gracePeriod=30 Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.431766 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.432045 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-log" containerID="cri-o://e10a0ba2d59b9aeea5747dddbdea52543daebe8314190332dba7d0a315bb5442" gracePeriod=30 Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.432592 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-metadata" containerID="cri-o://6f8ed6fa46e43d3dc6c51ce0397f217429b3d65727a574a6f768b4b2c53ae827" gracePeriod=30 Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.454099 4988 generic.go:334] "Generic (PLEG): container finished" podID="d3f67971-3923-4879-834e-66c6946e1b96" containerID="deb82dab5b7054b5f9ee8d945f16bbea50ecd99b6a57bdfcf01cc339e8299035" exitCode=0 Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.454164 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3f67971-3923-4879-834e-66c6946e1b96","Type":"ContainerDied","Data":"deb82dab5b7054b5f9ee8d945f16bbea50ecd99b6a57bdfcf01cc339e8299035"} Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.615524 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.724717 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-config-data\") pod \"d3f67971-3923-4879-834e-66c6946e1b96\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.725153 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4whgq\" (UniqueName: \"kubernetes.io/projected/d3f67971-3923-4879-834e-66c6946e1b96-kube-api-access-4whgq\") pod \"d3f67971-3923-4879-834e-66c6946e1b96\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.725179 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-combined-ca-bundle\") pod \"d3f67971-3923-4879-834e-66c6946e1b96\" (UID: \"d3f67971-3923-4879-834e-66c6946e1b96\") " Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.733522 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f67971-3923-4879-834e-66c6946e1b96-kube-api-access-4whgq" (OuterVolumeSpecName: "kube-api-access-4whgq") pod "d3f67971-3923-4879-834e-66c6946e1b96" (UID: "d3f67971-3923-4879-834e-66c6946e1b96"). InnerVolumeSpecName "kube-api-access-4whgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.777452 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3f67971-3923-4879-834e-66c6946e1b96" (UID: "d3f67971-3923-4879-834e-66c6946e1b96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.783304 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-config-data" (OuterVolumeSpecName: "config-data") pod "d3f67971-3923-4879-834e-66c6946e1b96" (UID: "d3f67971-3923-4879-834e-66c6946e1b96"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.827149 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.827418 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4whgq\" (UniqueName: \"kubernetes.io/projected/d3f67971-3923-4879-834e-66c6946e1b96-kube-api-access-4whgq\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:41 crc kubenswrapper[4988]: I1123 09:09:41.827488 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f67971-3923-4879-834e-66c6946e1b96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.465051 4988 generic.go:334] "Generic (PLEG): container finished" podID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerID="7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc" exitCode=143 Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.465397 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9c1d535-02d9-4462-860f-f48bb5ba1454","Type":"ContainerDied","Data":"7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc"} Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.467520 4988 generic.go:334] "Generic (PLEG): container finished" podID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerID="e10a0ba2d59b9aeea5747dddbdea52543daebe8314190332dba7d0a315bb5442" exitCode=143 Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.467595 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bbc54c-288e-4ec1-a085-78e0ddf18d2d","Type":"ContainerDied","Data":"e10a0ba2d59b9aeea5747dddbdea52543daebe8314190332dba7d0a315bb5442"} Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.469229 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3f67971-3923-4879-834e-66c6946e1b96","Type":"ContainerDied","Data":"059a327ca9f3810e451fe37d8b90e0e162e76d041fc5bab1794fa73a839d4924"} Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.469265 4988 scope.go:117] "RemoveContainer" containerID="deb82dab5b7054b5f9ee8d945f16bbea50ecd99b6a57bdfcf01cc339e8299035" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.469371 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.517240 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.535445 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.548580 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:09:42 crc kubenswrapper[4988]: E1123 09:09:42.549225 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f67971-3923-4879-834e-66c6946e1b96" containerName="nova-cell1-conductor-conductor" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.549341 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f67971-3923-4879-834e-66c6946e1b96" containerName="nova-cell1-conductor-conductor" Nov 23 09:09:42 crc kubenswrapper[4988]: E1123 09:09:42.549467 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="890a5459-3557-40c1-a1fc-e44689e6525d" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.549541 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="890a5459-3557-40c1-a1fc-e44689e6525d" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.549810 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="890a5459-3557-40c1-a1fc-e44689e6525d" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.549925 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f67971-3923-4879-834e-66c6946e1b96" containerName="nova-cell1-conductor-conductor" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.550734 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.553323 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.561021 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.642437 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7273a5-2352-4130-b94f-fe9b43fdd727-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8d7273a5-2352-4130-b94f-fe9b43fdd727\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.642571 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7273a5-2352-4130-b94f-fe9b43fdd727-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8d7273a5-2352-4130-b94f-fe9b43fdd727\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.642653 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2zdj\" (UniqueName: \"kubernetes.io/projected/8d7273a5-2352-4130-b94f-fe9b43fdd727-kube-api-access-x2zdj\") pod \"nova-cell1-conductor-0\" (UID: \"8d7273a5-2352-4130-b94f-fe9b43fdd727\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.744452 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7273a5-2352-4130-b94f-fe9b43fdd727-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8d7273a5-2352-4130-b94f-fe9b43fdd727\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.744571 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2zdj\" (UniqueName: \"kubernetes.io/projected/8d7273a5-2352-4130-b94f-fe9b43fdd727-kube-api-access-x2zdj\") pod \"nova-cell1-conductor-0\" (UID: \"8d7273a5-2352-4130-b94f-fe9b43fdd727\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.744744 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7273a5-2352-4130-b94f-fe9b43fdd727-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8d7273a5-2352-4130-b94f-fe9b43fdd727\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.749626 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d7273a5-2352-4130-b94f-fe9b43fdd727-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8d7273a5-2352-4130-b94f-fe9b43fdd727\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.749939 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d7273a5-2352-4130-b94f-fe9b43fdd727-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8d7273a5-2352-4130-b94f-fe9b43fdd727\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.761698 4988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2zdj\" (UniqueName: \"kubernetes.io/projected/8d7273a5-2352-4130-b94f-fe9b43fdd727-kube-api-access-x2zdj\") pod \"nova-cell1-conductor-0\" (UID: \"8d7273a5-2352-4130-b94f-fe9b43fdd727\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:42 crc kubenswrapper[4988]: I1123 09:09:42.871697 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.127997 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.209398 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.358292 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.368566 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gc89r"] Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.446047 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.460874 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h2bd\" (UniqueName: \"kubernetes.io/projected/836ddb4b-a468-45d0-9f73-48fcea741926-kube-api-access-6h2bd\") pod \"836ddb4b-a468-45d0-9f73-48fcea741926\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.460944 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-config-data\") pod \"836ddb4b-a468-45d0-9f73-48fcea741926\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.461015 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-combined-ca-bundle\") pod \"836ddb4b-a468-45d0-9f73-48fcea741926\" (UID: \"836ddb4b-a468-45d0-9f73-48fcea741926\") " Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.466848 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836ddb4b-a468-45d0-9f73-48fcea741926-kube-api-access-6h2bd" (OuterVolumeSpecName: "kube-api-access-6h2bd") pod "836ddb4b-a468-45d0-9f73-48fcea741926" (UID: "836ddb4b-a468-45d0-9f73-48fcea741926"). InnerVolumeSpecName "kube-api-access-6h2bd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.484693 4988 generic.go:334] "Generic (PLEG): container finished" podID="836ddb4b-a468-45d0-9f73-48fcea741926" containerID="b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b" exitCode=0 Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.484849 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"836ddb4b-a468-45d0-9f73-48fcea741926","Type":"ContainerDied","Data":"b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b"} Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.485102 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"836ddb4b-a468-45d0-9f73-48fcea741926","Type":"ContainerDied","Data":"84d1b607cea2f08f28d2c01f881e56a197598467ffe010b66691f4701a694655"} Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.485120 4988 scope.go:117] "RemoveContainer" containerID="b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.484935 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.491047 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8d7273a5-2352-4130-b94f-fe9b43fdd727","Type":"ContainerStarted","Data":"80ab8de7ac42b4770b9a974fc408046e2e1351ca1d09df6c8c1482a56c38e636"} Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.495591 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "836ddb4b-a468-45d0-9f73-48fcea741926" (UID: "836ddb4b-a468-45d0-9f73-48fcea741926"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.502171 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-config-data" (OuterVolumeSpecName: "config-data") pod "836ddb4b-a468-45d0-9f73-48fcea741926" (UID: "836ddb4b-a468-45d0-9f73-48fcea741926"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.563146 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h2bd\" (UniqueName: \"kubernetes.io/projected/836ddb4b-a468-45d0-9f73-48fcea741926-kube-api-access-6h2bd\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.563177 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.563203 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836ddb4b-a468-45d0-9f73-48fcea741926-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.638027 4988 scope.go:117] "RemoveContainer" containerID="b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b" Nov 23 09:09:43 crc kubenswrapper[4988]: E1123 09:09:43.638442 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b\": container with ID starting with b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b not found: ID does not exist" containerID="b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.638488 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b"} err="failed to get container status \"b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b\": rpc error: code = NotFound desc = could not find container \"b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b\": container with ID starting with b00aaccb37a5eb6df84db27ed5153817e58787f6b421fd8f1fb70e9116ab476b not found: ID does not exist" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.815582 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.823602 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.846150 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 09:09:43 crc kubenswrapper[4988]: E1123 09:09:43.846560 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836ddb4b-a468-45d0-9f73-48fcea741926" containerName="nova-scheduler-scheduler" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.846579 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="836ddb4b-a468-45d0-9f73-48fcea741926" containerName="nova-scheduler-scheduler" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.846793 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="836ddb4b-a468-45d0-9f73-48fcea741926" containerName="nova-scheduler-scheduler" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.847452 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.849964 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.860121 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.984980 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n8tc\" (UniqueName: \"kubernetes.io/projected/038b42e6-7d25-4e90-8acd-cc9d94fedb24-kube-api-access-5n8tc\") pod \"nova-scheduler-0\" (UID: \"038b42e6-7d25-4e90-8acd-cc9d94fedb24\") " pod="openstack/nova-scheduler-0" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.985143 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b42e6-7d25-4e90-8acd-cc9d94fedb24-config-data\") pod \"nova-scheduler-0\" (UID: \"038b42e6-7d25-4e90-8acd-cc9d94fedb24\") " pod="openstack/nova-scheduler-0" Nov 23 09:09:43 crc kubenswrapper[4988]: I1123 09:09:43.985190 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b42e6-7d25-4e90-8acd-cc9d94fedb24-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"038b42e6-7d25-4e90-8acd-cc9d94fedb24\") " pod="openstack/nova-scheduler-0" Nov 23 09:09:44 crc kubenswrapper[4988]: I1123 09:09:44.087264 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b42e6-7d25-4e90-8acd-cc9d94fedb24-config-data\") pod \"nova-scheduler-0\" (UID: \"038b42e6-7d25-4e90-8acd-cc9d94fedb24\") " pod="openstack/nova-scheduler-0" Nov 23 09:09:44 crc kubenswrapper[4988]: I1123 09:09:44.087370 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b42e6-7d25-4e90-8acd-cc9d94fedb24-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"038b42e6-7d25-4e90-8acd-cc9d94fedb24\") " pod="openstack/nova-scheduler-0" Nov 23 09:09:44 crc kubenswrapper[4988]: I1123 09:09:44.087553 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n8tc\" (UniqueName: \"kubernetes.io/projected/038b42e6-7d25-4e90-8acd-cc9d94fedb24-kube-api-access-5n8tc\") pod \"nova-scheduler-0\" (UID: \"038b42e6-7d25-4e90-8acd-cc9d94fedb24\") " pod="openstack/nova-scheduler-0" Nov 23 09:09:44 crc kubenswrapper[4988]: I1123 09:09:44.094712 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/038b42e6-7d25-4e90-8acd-cc9d94fedb24-config-data\") pod \"nova-scheduler-0\" (UID: \"038b42e6-7d25-4e90-8acd-cc9d94fedb24\") " pod="openstack/nova-scheduler-0" Nov 23 09:09:44 crc kubenswrapper[4988]: I1123 09:09:44.095390 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038b42e6-7d25-4e90-8acd-cc9d94fedb24-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"038b42e6-7d25-4e90-8acd-cc9d94fedb24\") " pod="openstack/nova-scheduler-0" Nov 23 09:09:44 crc kubenswrapper[4988]: I1123 09:09:44.115728 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n8tc\" (UniqueName: 
\"kubernetes.io/projected/038b42e6-7d25-4e90-8acd-cc9d94fedb24-kube-api-access-5n8tc\") pod \"nova-scheduler-0\" (UID: \"038b42e6-7d25-4e90-8acd-cc9d94fedb24\") " pod="openstack/nova-scheduler-0" Nov 23 09:09:44 crc kubenswrapper[4988]: I1123 09:09:44.176973 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:44.505004 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gc89r" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="registry-server" containerID="cri-o://b02718e3d30570ecbb0c493254ddf68dae45b51832aa5790f3071f98edcb3bec" gracePeriod=2 Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:44.510529 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836ddb4b-a468-45d0-9f73-48fcea741926" path="/var/lib/kubelet/pods/836ddb4b-a468-45d0-9f73-48fcea741926/volumes" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:44.511050 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f67971-3923-4879-834e-66c6946e1b96" path="/var/lib/kubelet/pods/d3f67971-3923-4879-834e-66c6946e1b96/volumes" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:44.511998 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8d7273a5-2352-4130-b94f-fe9b43fdd727","Type":"ContainerStarted","Data":"2f6903d9699a1faa5a886abc780ed6f05f2c9f14a3cb50aa1941496a19a49246"} Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:44.512036 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:44.532306 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.53228562 podStartE2EDuration="2.53228562s" podCreationTimestamp="2025-11-23 09:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:09:44.520820483 +0000 UTC m=+8636.829333246" watchObservedRunningTime="2025-11-23 09:09:44.53228562 +0000 UTC m=+8636.840798383" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:44.684391 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 09:09:45 crc kubenswrapper[4988]: W1123 09:09:44.753782 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod038b42e6_7d25_4e90_8acd_cc9d94fedb24.slice/crio-e1aabcb9ad421c29b42486dd3ec21cb57b12af531e568eed98e38a926e7acdb3 WatchSource:0}: Error finding container e1aabcb9ad421c29b42486dd3ec21cb57b12af531e568eed98e38a926e7acdb3: Status 404 returned error can't find the container with id e1aabcb9ad421c29b42486dd3ec21cb57b12af531e568eed98e38a926e7acdb3 Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:44.863407 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": read tcp 10.217.0.2:41608->10.217.1.88:8775: read: connection reset by peer" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:44.863662 4988 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.88:8775/\": read tcp 10.217.0.2:41618->10.217.1.88:8775: read: connection reset by peer" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.517559 4988 generic.go:334] "Generic (PLEG): container finished" podID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerID="b02718e3d30570ecbb0c493254ddf68dae45b51832aa5790f3071f98edcb3bec" exitCode=0 Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.517622 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc89r" event={"ID":"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1","Type":"ContainerDied","Data":"b02718e3d30570ecbb0c493254ddf68dae45b51832aa5790f3071f98edcb3bec"} Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.517653 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gc89r" event={"ID":"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1","Type":"ContainerDied","Data":"0a590ee38903a1d4804206317ba9bfd600572fd99855b0087891f4f07319f035"} Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.517668 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a590ee38903a1d4804206317ba9bfd600572fd99855b0087891f4f07319f035" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.519421 4988 generic.go:334] "Generic (PLEG): container finished" podID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerID="6f8ed6fa46e43d3dc6c51ce0397f217429b3d65727a574a6f768b4b2c53ae827" exitCode=0 Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.519466 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bbc54c-288e-4ec1-a085-78e0ddf18d2d","Type":"ContainerDied","Data":"6f8ed6fa46e43d3dc6c51ce0397f217429b3d65727a574a6f768b4b2c53ae827"} Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.519485 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81bbc54c-288e-4ec1-a085-78e0ddf18d2d","Type":"ContainerDied","Data":"dd336c983ee1a38aff12dcaa693c41bb5e9e56f812931dac753910807ad839c6"} Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.519497 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd336c983ee1a38aff12dcaa693c41bb5e9e56f812931dac753910807ad839c6" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.520851 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"038b42e6-7d25-4e90-8acd-cc9d94fedb24","Type":"ContainerStarted","Data":"fd1758368474a25a35906909c23db105e555c88f6e459ae442e3944bbcb7b16c"} Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.520881 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"038b42e6-7d25-4e90-8acd-cc9d94fedb24","Type":"ContainerStarted","Data":"e1aabcb9ad421c29b42486dd3ec21cb57b12af531e568eed98e38a926e7acdb3"} Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.523436 4988 generic.go:334] "Generic (PLEG): container finished" podID="c062963f-c5e3-441c-b6a0-76fd001da005" containerID="b8b335d21268545dbc86ee407e743c702acf840a5177837a1b68585c6ee8203d" exitCode=0 Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.524079 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c062963f-c5e3-441c-b6a0-76fd001da005","Type":"ContainerDied","Data":"b8b335d21268545dbc86ee407e743c702acf840a5177837a1b68585c6ee8203d"} Nov 23 09:09:45 crc 
kubenswrapper[4988]: I1123 09:09:45.524109 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c062963f-c5e3-441c-b6a0-76fd001da005","Type":"ContainerDied","Data":"d7c4a4527736e53bee01a245796e3237ccfd22ddff5104d85717bd3f08fd8834"} Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.524122 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7c4a4527736e53bee01a245796e3237ccfd22ddff5104d85717bd3f08fd8834" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.548138 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.548119454 podStartE2EDuration="2.548119454s" podCreationTimestamp="2025-11-23 09:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:09:45.542033602 +0000 UTC m=+8637.850546365" watchObservedRunningTime="2025-11-23 09:09:45.548119454 +0000 UTC m=+8637.856632207" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.555327 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.562359 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.600821 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.731776 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk5vs\" (UniqueName: \"kubernetes.io/projected/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-kube-api-access-lk5vs\") pod \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.731823 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-logs\") pod \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.731850 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-utilities\") pod \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.731907 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-combined-ca-bundle\") pod \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.731982 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmdqz\" (UniqueName: \"kubernetes.io/projected/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-kube-api-access-vmdqz\") pod \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.732020 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-catalog-content\") pod \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.732055 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-config-data\") pod \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.732129 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-config-data\") pod \"c062963f-c5e3-441c-b6a0-76fd001da005\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.732187 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-nova-metadata-tls-certs\") pod \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\" (UID: \"81bbc54c-288e-4ec1-a085-78e0ddf18d2d\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.732238 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-combined-ca-bundle\") pod \"c062963f-c5e3-441c-b6a0-76fd001da005\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.732264 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7dzq\" (UniqueName: \"kubernetes.io/projected/c062963f-c5e3-441c-b6a0-76fd001da005-kube-api-access-v7dzq\") pod \"c062963f-c5e3-441c-b6a0-76fd001da005\" (UID: \"c062963f-c5e3-441c-b6a0-76fd001da005\") " Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.732267 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-logs" (OuterVolumeSpecName: "logs") pod "81bbc54c-288e-4ec1-a085-78e0ddf18d2d" (UID: "81bbc54c-288e-4ec1-a085-78e0ddf18d2d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.732693 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-utilities" (OuterVolumeSpecName: "utilities") pod "4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" (UID: "4a9e394a-e173-45ae-bc39-cc2aebf2e3d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.733140 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-logs\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.733154 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.738637 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-kube-api-access-lk5vs" (OuterVolumeSpecName: "kube-api-access-lk5vs") pod "81bbc54c-288e-4ec1-a085-78e0ddf18d2d" (UID: "81bbc54c-288e-4ec1-a085-78e0ddf18d2d"). InnerVolumeSpecName "kube-api-access-lk5vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.742455 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c062963f-c5e3-441c-b6a0-76fd001da005-kube-api-access-v7dzq" (OuterVolumeSpecName: "kube-api-access-v7dzq") pod "c062963f-c5e3-441c-b6a0-76fd001da005" (UID: "c062963f-c5e3-441c-b6a0-76fd001da005"). InnerVolumeSpecName "kube-api-access-v7dzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.744010 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-kube-api-access-vmdqz" (OuterVolumeSpecName: "kube-api-access-vmdqz") pod "4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" (UID: "4a9e394a-e173-45ae-bc39-cc2aebf2e3d1"). InnerVolumeSpecName "kube-api-access-vmdqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.771512 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-config-data" (OuterVolumeSpecName: "config-data") pod "c062963f-c5e3-441c-b6a0-76fd001da005" (UID: "c062963f-c5e3-441c-b6a0-76fd001da005"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.774374 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c062963f-c5e3-441c-b6a0-76fd001da005" (UID: "c062963f-c5e3-441c-b6a0-76fd001da005"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.774875 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81bbc54c-288e-4ec1-a085-78e0ddf18d2d" (UID: "81bbc54c-288e-4ec1-a085-78e0ddf18d2d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.782025 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-config-data" (OuterVolumeSpecName: "config-data") pod "81bbc54c-288e-4ec1-a085-78e0ddf18d2d" (UID: "81bbc54c-288e-4ec1-a085-78e0ddf18d2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.791137 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "81bbc54c-288e-4ec1-a085-78e0ddf18d2d" (UID: "81bbc54c-288e-4ec1-a085-78e0ddf18d2d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.835462 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" (UID: "4a9e394a-e173-45ae-bc39-cc2aebf2e3d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.837788 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-catalog-content\") pod \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\" (UID: \"4a9e394a-e173-45ae-bc39-cc2aebf2e3d1\") " Nov 23 09:09:45 crc kubenswrapper[4988]: W1123 09:09:45.837957 4988 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1/volumes/kubernetes.io~empty-dir/catalog-content Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.837985 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" (UID: "4a9e394a-e173-45ae-bc39-cc2aebf2e3d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.839614 4988 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.839643 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.839658 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7dzq\" (UniqueName: \"kubernetes.io/projected/c062963f-c5e3-441c-b6a0-76fd001da005-kube-api-access-v7dzq\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.839682 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk5vs\" (UniqueName: \"kubernetes.io/projected/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-kube-api-access-lk5vs\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.839694 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.839706 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmdqz\" (UniqueName: \"kubernetes.io/projected/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-kube-api-access-vmdqz\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.839718 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.839761 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bbc54c-288e-4ec1-a085-78e0ddf18d2d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:45 crc kubenswrapper[4988]: I1123 09:09:45.839774 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c062963f-c5e3-441c-b6a0-76fd001da005-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.140365 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.253737 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-combined-ca-bundle\") pod \"a9c1d535-02d9-4462-860f-f48bb5ba1454\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.254372 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9c1d535-02d9-4462-860f-f48bb5ba1454-logs\") pod \"a9c1d535-02d9-4462-860f-f48bb5ba1454\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.254604 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-config-data\") pod \"a9c1d535-02d9-4462-860f-f48bb5ba1454\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.254686 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-public-tls-certs\") pod \"a9c1d535-02d9-4462-860f-f48bb5ba1454\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.254731 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-internal-tls-certs\") pod \"a9c1d535-02d9-4462-860f-f48bb5ba1454\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.254778 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jgfk\" (UniqueName: \"kubernetes.io/projected/a9c1d535-02d9-4462-860f-f48bb5ba1454-kube-api-access-9jgfk\") pod \"a9c1d535-02d9-4462-860f-f48bb5ba1454\" (UID: \"a9c1d535-02d9-4462-860f-f48bb5ba1454\") " Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.256585 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9c1d535-02d9-4462-860f-f48bb5ba1454-logs" (OuterVolumeSpecName: "logs") pod "a9c1d535-02d9-4462-860f-f48bb5ba1454" (UID: "a9c1d535-02d9-4462-860f-f48bb5ba1454"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.263620 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c1d535-02d9-4462-860f-f48bb5ba1454-kube-api-access-9jgfk" (OuterVolumeSpecName: "kube-api-access-9jgfk") pod "a9c1d535-02d9-4462-860f-f48bb5ba1454" (UID: "a9c1d535-02d9-4462-860f-f48bb5ba1454"). InnerVolumeSpecName "kube-api-access-9jgfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.298439 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9c1d535-02d9-4462-860f-f48bb5ba1454" (UID: "a9c1d535-02d9-4462-860f-f48bb5ba1454"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.304293 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-config-data" (OuterVolumeSpecName: "config-data") pod "a9c1d535-02d9-4462-860f-f48bb5ba1454" (UID: "a9c1d535-02d9-4462-860f-f48bb5ba1454"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.358537 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jgfk\" (UniqueName: \"kubernetes.io/projected/a9c1d535-02d9-4462-860f-f48bb5ba1454-kube-api-access-9jgfk\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.358578 4988 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.358590 4988 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9c1d535-02d9-4462-860f-f48bb5ba1454-logs\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.358601 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.365731 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a9c1d535-02d9-4462-860f-f48bb5ba1454" (UID: "a9c1d535-02d9-4462-860f-f48bb5ba1454"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.388513 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a9c1d535-02d9-4462-860f-f48bb5ba1454" (UID: "a9c1d535-02d9-4462-860f-f48bb5ba1454"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.460002 4988 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.460040 4988 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9c1d535-02d9-4462-860f-f48bb5ba1454-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.533960 4988 generic.go:334] "Generic (PLEG): container finished" podID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerID="bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8" exitCode=0 Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.534053 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.534128 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9c1d535-02d9-4462-860f-f48bb5ba1454","Type":"ContainerDied","Data":"bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8"} Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.534261 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a9c1d535-02d9-4462-860f-f48bb5ba1454","Type":"ContainerDied","Data":"40be55423c5b411fed78d2bf1d8c0c2e10dfbfb64d003b9b26fd44d2890a3d10"} Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.534267 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gc89r" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.534290 4988 scope.go:117] "RemoveContainer" containerID="bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.534303 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.534259 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.561170 4988 scope.go:117] "RemoveContainer" containerID="7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.592656 4988 scope.go:117] "RemoveContainer" containerID="bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8" Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.600135 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8\": container with ID starting with bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8 not found: ID does not exist" containerID="bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.600326 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8"} err="failed to get container status \"bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8\": rpc error: code = NotFound desc = could not find container \"bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8\": container with ID starting with bca3d1ab453fafe1c0b22a4c0ebccb1e329b70a88a3ac4e05070f3036a780dd8 not found: ID does not exist" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.600406 4988 scope.go:117] "RemoveContainer" containerID="7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc" Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.600890 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc\": container with ID starting with 7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc not found: ID does not exist" containerID="7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.601002 4988 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc"} err="failed to get container status \"7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc\": rpc error: code = NotFound desc = could not find container \"7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc\": container with ID starting with 7ca6632bf31e4c933e94a62f025b081a82bd98f0b51cbcb5b4229dd8ddc6facc not found: ID does not exist" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.611407 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gc89r"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.626582 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gc89r"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.642928 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.645861 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.658306 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.665607 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.676842 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.677307 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="extract-utilities" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677324 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="extract-utilities" Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.677335 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c062963f-c5e3-441c-b6a0-76fd001da005" containerName="nova-cell0-conductor-conductor" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677342 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c062963f-c5e3-441c-b6a0-76fd001da005" containerName="nova-cell0-conductor-conductor" Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.677362 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-log" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677368 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-log" Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.677392 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="extract-content" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677397 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="extract-content" Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.677409 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-log" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677415 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" 
containerName="nova-metadata-log" Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.677430 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="registry-server" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677436 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="registry-server" Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.677444 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-metadata" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677450 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-metadata" Nov 23 09:09:46 crc kubenswrapper[4988]: E1123 09:09:46.677460 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-api" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677466 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-api" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677646 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-log" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677669 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-log" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677679 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" containerName="nova-metadata-metadata" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677687 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" containerName="nova-api-api" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677698 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="c062963f-c5e3-441c-b6a0-76fd001da005" containerName="nova-cell0-conductor-conductor" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.677710 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" containerName="registry-server" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.678934 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.681432 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.681656 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.688276 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.697069 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.703097 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.707169 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.707418 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.707553 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.707289 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.715746 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.725998 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.735682 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.737274 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.739292 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.761128 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.770363 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g49s\" (UniqueName: \"kubernetes.io/projected/9f76eb04-e618-4e95-a091-9a4b1a4c6065-kube-api-access-7g49s\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.770540 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f76eb04-e618-4e95-a091-9a4b1a4c6065-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.770606 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f76eb04-e618-4e95-a091-9a4b1a4c6065-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.770668 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f76eb04-e618-4e95-a091-9a4b1a4c6065-config-data\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.770685 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f76eb04-e618-4e95-a091-9a4b1a4c6065-logs\") pod \"nova-metadata-0\" (UID: 
\"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872187 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g49s\" (UniqueName: \"kubernetes.io/projected/9f76eb04-e618-4e95-a091-9a4b1a4c6065-kube-api-access-7g49s\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872262 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce731121-aae3-4ce2-90ae-29f9b5b5a40a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ce731121-aae3-4ce2-90ae-29f9b5b5a40a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872324 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-public-tls-certs\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872364 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slxnq\" (UniqueName: \"kubernetes.io/projected/ce731121-aae3-4ce2-90ae-29f9b5b5a40a-kube-api-access-slxnq\") pod \"nova-cell0-conductor-0\" (UID: \"ce731121-aae3-4ce2-90ae-29f9b5b5a40a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872403 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f76eb04-e618-4e95-a091-9a4b1a4c6065-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872443 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872484 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce731121-aae3-4ce2-90ae-29f9b5b5a40a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ce731121-aae3-4ce2-90ae-29f9b5b5a40a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872542 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f76eb04-e618-4e95-a091-9a4b1a4c6065-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872604 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2edb10-f06f-4f09-8567-41e3c7893154-logs\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 
09:09:46.872627 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xttdp\" (UniqueName: \"kubernetes.io/projected/4c2edb10-f06f-4f09-8567-41e3c7893154-kube-api-access-xttdp\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872646 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f76eb04-e618-4e95-a091-9a4b1a4c6065-config-data\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872685 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f76eb04-e618-4e95-a091-9a4b1a4c6065-logs\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872708 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.872726 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-config-data\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.873115 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f76eb04-e618-4e95-a091-9a4b1a4c6065-logs\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.879644 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f76eb04-e618-4e95-a091-9a4b1a4c6065-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.879974 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f76eb04-e618-4e95-a091-9a4b1a4c6065-config-data\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.889146 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g49s\" (UniqueName: \"kubernetes.io/projected/9f76eb04-e618-4e95-a091-9a4b1a4c6065-kube-api-access-7g49s\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.899272 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f76eb04-e618-4e95-a091-9a4b1a4c6065-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f76eb04-e618-4e95-a091-9a4b1a4c6065\") " 
pod="openstack/nova-metadata-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.974427 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.974799 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce731121-aae3-4ce2-90ae-29f9b5b5a40a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ce731121-aae3-4ce2-90ae-29f9b5b5a40a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.974888 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2edb10-f06f-4f09-8567-41e3c7893154-logs\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.974907 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xttdp\" (UniqueName: \"kubernetes.io/projected/4c2edb10-f06f-4f09-8567-41e3c7893154-kube-api-access-xttdp\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.974938 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.974979 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-config-data\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.975065 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce731121-aae3-4ce2-90ae-29f9b5b5a40a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ce731121-aae3-4ce2-90ae-29f9b5b5a40a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.975097 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-public-tls-certs\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.975142 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slxnq\" (UniqueName: \"kubernetes.io/projected/ce731121-aae3-4ce2-90ae-29f9b5b5a40a-kube-api-access-slxnq\") pod \"nova-cell0-conductor-0\" (UID: \"ce731121-aae3-4ce2-90ae-29f9b5b5a40a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.979355 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.979467 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce731121-aae3-4ce2-90ae-29f9b5b5a40a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ce731121-aae3-4ce2-90ae-29f9b5b5a40a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.979478 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.979717 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c2edb10-f06f-4f09-8567-41e3c7893154-logs\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.981237 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-public-tls-certs\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.981474 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce731121-aae3-4ce2-90ae-29f9b5b5a40a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ce731121-aae3-4ce2-90ae-29f9b5b5a40a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.981539 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2edb10-f06f-4f09-8567-41e3c7893154-config-data\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:46 crc kubenswrapper[4988]: I1123 09:09:46.994757 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slxnq\" (UniqueName: \"kubernetes.io/projected/ce731121-aae3-4ce2-90ae-29f9b5b5a40a-kube-api-access-slxnq\") pod \"nova-cell0-conductor-0\" (UID: \"ce731121-aae3-4ce2-90ae-29f9b5b5a40a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:47 crc kubenswrapper[4988]: I1123 09:09:47.007632 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xttdp\" (UniqueName: \"kubernetes.io/projected/4c2edb10-f06f-4f09-8567-41e3c7893154-kube-api-access-xttdp\") pod \"nova-api-0\" (UID: \"4c2edb10-f06f-4f09-8567-41e3c7893154\") " pod="openstack/nova-api-0" Nov 23 09:09:47 crc kubenswrapper[4988]: I1123 09:09:47.014104 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 09:09:47 crc kubenswrapper[4988]: I1123 09:09:47.028825 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 09:09:47 crc kubenswrapper[4988]: I1123 09:09:47.063737 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:47 crc kubenswrapper[4988]: I1123 09:09:47.565016 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:09:47 crc kubenswrapper[4988]: W1123 09:09:47.574315 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f76eb04_e618_4e95_a091_9a4b1a4c6065.slice/crio-b3f255078f534c5a23ad9242a088637be4baf9d03a443bc55012270059c40242 WatchSource:0}: Error finding container b3f255078f534c5a23ad9242a088637be4baf9d03a443bc55012270059c40242: Status 404 returned error can't find the container with id b3f255078f534c5a23ad9242a088637be4baf9d03a443bc55012270059c40242 Nov 23 09:09:47 crc kubenswrapper[4988]: I1123 09:09:47.658073 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:09:47 crc kubenswrapper[4988]: W1123 09:09:47.671298 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce731121_aae3_4ce2_90ae_29f9b5b5a40a.slice/crio-49de69898e0830e7cfd84b14be2bc117d7aec5653512c89a73d99ed8b222bc0e WatchSource:0}: Error finding container 49de69898e0830e7cfd84b14be2bc117d7aec5653512c89a73d99ed8b222bc0e: Status 404 returned error can't find the container with id 49de69898e0830e7cfd84b14be2bc117d7aec5653512c89a73d99ed8b222bc0e Nov 23 09:09:47 crc kubenswrapper[4988]: I1123 09:09:47.672415 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 09:09:47 crc kubenswrapper[4988]: W1123 09:09:47.673026 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c2edb10_f06f_4f09_8567_41e3c7893154.slice/crio-c4fa21c2fc77dfeda090282e96f7c411b5e4409a8f362002971f19b4e6c79710 WatchSource:0}: Error finding container c4fa21c2fc77dfeda090282e96f7c411b5e4409a8f362002971f19b4e6c79710: Status 404 returned error can't find the container with id c4fa21c2fc77dfeda090282e96f7c411b5e4409a8f362002971f19b4e6c79710 Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.508236 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9e394a-e173-45ae-bc39-cc2aebf2e3d1" path="/var/lib/kubelet/pods/4a9e394a-e173-45ae-bc39-cc2aebf2e3d1/volumes" Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.509382 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81bbc54c-288e-4ec1-a085-78e0ddf18d2d" path="/var/lib/kubelet/pods/81bbc54c-288e-4ec1-a085-78e0ddf18d2d/volumes" Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.510223 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c1d535-02d9-4462-860f-f48bb5ba1454" path="/var/lib/kubelet/pods/a9c1d535-02d9-4462-860f-f48bb5ba1454/volumes" Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.511541 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c062963f-c5e3-441c-b6a0-76fd001da005" path="/var/lib/kubelet/pods/c062963f-c5e3-441c-b6a0-76fd001da005/volumes" Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.559692 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ce731121-aae3-4ce2-90ae-29f9b5b5a40a","Type":"ContainerStarted","Data":"a5e021929734cef8515f08a869017bcbd86178798422049197372145a97818f4"} Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.560004 4988 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ce731121-aae3-4ce2-90ae-29f9b5b5a40a","Type":"ContainerStarted","Data":"49de69898e0830e7cfd84b14be2bc117d7aec5653512c89a73d99ed8b222bc0e"} Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.561075 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.563176 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c2edb10-f06f-4f09-8567-41e3c7893154","Type":"ContainerStarted","Data":"c7e68aac7196f0a67455a1faaf3b2c9f9781bf4dcb17db697000fb3a6c03e341"} Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.563209 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c2edb10-f06f-4f09-8567-41e3c7893154","Type":"ContainerStarted","Data":"dc04931618e74c1d7cda553374e5f1b066cf2ce462463b63e39db0c84a3bf067"} Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.563221 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c2edb10-f06f-4f09-8567-41e3c7893154","Type":"ContainerStarted","Data":"c4fa21c2fc77dfeda090282e96f7c411b5e4409a8f362002971f19b4e6c79710"} Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.569942 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f76eb04-e618-4e95-a091-9a4b1a4c6065","Type":"ContainerStarted","Data":"bc0126c1f1c645e6f6f31cb9e0b84ac202763dbd46e43146d2d0b67ddf44e740"} Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.569987 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f76eb04-e618-4e95-a091-9a4b1a4c6065","Type":"ContainerStarted","Data":"41144e0be5c079af365061aca16f659a0ac001f1f16899b49575a2fc79b27181"} Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.570003 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f76eb04-e618-4e95-a091-9a4b1a4c6065","Type":"ContainerStarted","Data":"b3f255078f534c5a23ad9242a088637be4baf9d03a443bc55012270059c40242"} Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.583625 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.583608548 podStartE2EDuration="2.583608548s" podCreationTimestamp="2025-11-23 09:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:09:48.574036159 +0000 UTC m=+8640.882548942" watchObservedRunningTime="2025-11-23 09:09:48.583608548 +0000 UTC m=+8640.892121311" Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.601121 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.601099116 podStartE2EDuration="2.601099116s" podCreationTimestamp="2025-11-23 09:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:09:48.59446247 +0000 UTC m=+8640.902975243" watchObservedRunningTime="2025-11-23 09:09:48.601099116 +0000 UTC m=+8640.909611889" Nov 23 09:09:48 crc kubenswrapper[4988]: I1123 09:09:48.622265 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.622245566 podStartE2EDuration="2.622245566s" 
podCreationTimestamp="2025-11-23 09:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:09:48.616349418 +0000 UTC m=+8640.924862191" watchObservedRunningTime="2025-11-23 09:09:48.622245566 +0000 UTC m=+8640.930758339" Nov 23 09:09:49 crc kubenswrapper[4988]: I1123 09:09:49.177948 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 09:09:52 crc kubenswrapper[4988]: I1123 09:09:52.015230 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 09:09:52 crc kubenswrapper[4988]: I1123 09:09:52.015730 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 09:09:52 crc kubenswrapper[4988]: I1123 09:09:52.097856 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 23 09:09:52 crc kubenswrapper[4988]: I1123 09:09:52.925508 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 23 09:09:54 crc kubenswrapper[4988]: I1123 09:09:54.177487 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 09:09:54 crc kubenswrapper[4988]: I1123 09:09:54.211760 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 09:09:54 crc kubenswrapper[4988]: I1123 09:09:54.673141 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 09:09:57 crc kubenswrapper[4988]: I1123 09:09:57.015019 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 09:09:57 crc kubenswrapper[4988]: I1123 09:09:57.015484 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 09:09:57 crc kubenswrapper[4988]: I1123 09:09:57.030320 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 09:09:57 crc kubenswrapper[4988]: I1123 09:09:57.030371 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 09:09:58 crc kubenswrapper[4988]: I1123 09:09:58.031420 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9f76eb04-e618-4e95-a091-9a4b1a4c6065" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.176:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 09:09:58 crc kubenswrapper[4988]: I1123 09:09:58.031434 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9f76eb04-e618-4e95-a091-9a4b1a4c6065" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.176:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 09:09:58 crc kubenswrapper[4988]: I1123 09:09:58.045405 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4c2edb10-f06f-4f09-8567-41e3c7893154" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 09:09:58 crc kubenswrapper[4988]: I1123 09:09:58.045488 4988 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4c2edb10-f06f-4f09-8567-41e3c7893154" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.177:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 09:10:04 crc kubenswrapper[4988]: I1123 09:10:04.151081 4988 scope.go:117] "RemoveContainer" containerID="e10a0ba2d59b9aeea5747dddbdea52543daebe8314190332dba7d0a315bb5442" Nov 23 09:10:04 crc kubenswrapper[4988]: I1123 09:10:04.176019 4988 scope.go:117] "RemoveContainer" containerID="6f8ed6fa46e43d3dc6c51ce0397f217429b3d65727a574a6f768b4b2c53ae827" Nov 23 09:10:04 crc kubenswrapper[4988]: I1123 09:10:04.202880 4988 scope.go:117] "RemoveContainer" containerID="b8b335d21268545dbc86ee407e743c702acf840a5177837a1b68585c6ee8203d" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.021211 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.022817 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.026735 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.037064 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.037610 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.044402 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.056153 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.792806 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.799753 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 09:10:07 crc kubenswrapper[4988]: I1123 09:10:07.806778 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.959732 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n"] Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.961185 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.969085 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.969139 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.969210 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.969376 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.969435 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.969448 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-9rg44" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.969504 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.979172 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n"] Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.989632 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.989687 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.989713 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.989736 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.989759 4988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.989830 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.989855 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.989891 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:08 crc kubenswrapper[4988]: I1123 09:10:08.989981 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm88m\" (UniqueName: \"kubernetes.io/projected/ca476e09-7dd2-40e8-9904-330ae85a51e0-kube-api-access-sm88m\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.091719 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm88m\" (UniqueName: \"kubernetes.io/projected/ca476e09-7dd2-40e8-9904-330ae85a51e0-kube-api-access-sm88m\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.092034 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.092159 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.092285 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.092386 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.092458 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.092583 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.092659 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.092759 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.098795 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " 
pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.098980 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.099668 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.104968 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.106338 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.106585 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.108309 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.108677 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.112723 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm88m\" (UniqueName: 
\"kubernetes.io/projected/ca476e09-7dd2-40e8-9904-330ae85a51e0-kube-api-access-sm88m\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.291117 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.790256 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n"] Nov 23 09:10:09 crc kubenswrapper[4988]: I1123 09:10:09.818683 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" event={"ID":"ca476e09-7dd2-40e8-9904-330ae85a51e0","Type":"ContainerStarted","Data":"041159377c1ae777c53f56fdb105261ac8843d4438cfbabc41a84fb0cf9f9210"} Nov 23 09:10:10 crc kubenswrapper[4988]: I1123 09:10:10.848209 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" event={"ID":"ca476e09-7dd2-40e8-9904-330ae85a51e0","Type":"ContainerStarted","Data":"0bbfd609b4fe68808199c55310f66035336e73ca9cbe97c25010e2f7975e11e1"} Nov 23 09:10:10 crc kubenswrapper[4988]: I1123 09:10:10.880130 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" podStartSLOduration=2.459740523 podStartE2EDuration="2.880109943s" podCreationTimestamp="2025-11-23 09:10:08 +0000 UTC" firstStartedPulling="2025-11-23 09:10:09.797922488 +0000 UTC m=+8662.106435261" lastFinishedPulling="2025-11-23 09:10:10.218291918 +0000 UTC m=+8662.526804681" observedRunningTime="2025-11-23 09:10:10.869375614 +0000 UTC m=+8663.177888387" watchObservedRunningTime="2025-11-23 09:10:10.880109943 +0000 UTC m=+8663.188622696" Nov 23 09:10:21 crc kubenswrapper[4988]: I1123 09:10:21.672008 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:10:21 crc kubenswrapper[4988]: I1123 09:10:21.672991 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:10:51 crc kubenswrapper[4988]: I1123 09:10:51.672401 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:10:51 crc kubenswrapper[4988]: I1123 09:10:51.673044 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 23 09:11:21 crc kubenswrapper[4988]: I1123 09:11:21.672424 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:11:21 crc kubenswrapper[4988]: I1123 09:11:21.673283 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:11:21 crc kubenswrapper[4988]: I1123 09:11:21.673378 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 09:11:21 crc kubenswrapper[4988]: I1123 09:11:21.674696 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:11:21 crc kubenswrapper[4988]: I1123 09:11:21.674853 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" gracePeriod=600 Nov 23 09:11:21 crc kubenswrapper[4988]: E1123 09:11:21.798334 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:11:22 crc kubenswrapper[4988]: I1123 09:11:22.645530 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" exitCode=0 Nov 23 09:11:22 crc kubenswrapper[4988]: I1123 09:11:22.645574 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"} Nov 23 09:11:22 crc kubenswrapper[4988]: I1123 09:11:22.645607 4988 scope.go:117] "RemoveContainer" containerID="e80484a673873fec9f4261c1283a0729952a52cb05e1b10f224bb6163ed3458e" Nov 23 09:11:22 crc kubenswrapper[4988]: I1123 09:11:22.646689 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:11:22 crc kubenswrapper[4988]: E1123 09:11:22.647146 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:11:33 crc kubenswrapper[4988]: I1123 09:11:33.498916 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:11:33 crc kubenswrapper[4988]: E1123 09:11:33.499610 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:11:48 crc kubenswrapper[4988]: I1123 09:11:48.512653 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:11:48 crc kubenswrapper[4988]: E1123 09:11:48.514387 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:11:59 crc kubenswrapper[4988]: I1123 09:11:59.496528 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:11:59 crc kubenswrapper[4988]: E1123 09:11:59.497586 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.362862 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7s49g"] Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.366267 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.378504 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7s49g"] Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.540506 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rghx\" (UniqueName: \"kubernetes.io/projected/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-kube-api-access-5rghx\") pod \"certified-operators-7s49g\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.540606 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-utilities\") pod \"certified-operators-7s49g\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.540635 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-catalog-content\") pod \"certified-operators-7s49g\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.642512 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-utilities\") pod \"certified-operators-7s49g\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.643279 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-catalog-content\") pod \"certified-operators-7s49g\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.643220 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-utilities\") pod \"certified-operators-7s49g\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.643582 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-catalog-content\") pod \"certified-operators-7s49g\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.644794 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rghx\" (UniqueName: \"kubernetes.io/projected/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-kube-api-access-5rghx\") pod \"certified-operators-7s49g\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.668451 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5rghx\" (UniqueName: \"kubernetes.io/projected/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-kube-api-access-5rghx\") pod \"certified-operators-7s49g\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:09 crc kubenswrapper[4988]: I1123 09:12:09.704280 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:10 crc kubenswrapper[4988]: I1123 09:12:10.335396 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7s49g"] Nov 23 09:12:10 crc kubenswrapper[4988]: I1123 09:12:10.496094 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:12:10 crc kubenswrapper[4988]: E1123 09:12:10.496363 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:12:11 crc kubenswrapper[4988]: I1123 09:12:11.338103 4988 generic.go:334] "Generic (PLEG): container finished" podID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerID="2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc" exitCode=0 Nov 23 09:12:11 crc kubenswrapper[4988]: I1123 09:12:11.338233 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7s49g" event={"ID":"6ff59599-9dd6-43ed-bde1-ded35dc90ecb","Type":"ContainerDied","Data":"2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc"} Nov 23 09:12:11 crc kubenswrapper[4988]: I1123 09:12:11.338541 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7s49g" event={"ID":"6ff59599-9dd6-43ed-bde1-ded35dc90ecb","Type":"ContainerStarted","Data":"c219d578659efbe48f74d09b43c983a8630a50e3190bf775c3e55931060f7399"} Nov 23 09:12:11 crc kubenswrapper[4988]: I1123 09:12:11.342292 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:12:12 crc kubenswrapper[4988]: I1123 09:12:12.352045 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7s49g" event={"ID":"6ff59599-9dd6-43ed-bde1-ded35dc90ecb","Type":"ContainerStarted","Data":"e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592"} Nov 23 09:12:14 crc kubenswrapper[4988]: I1123 09:12:14.370596 4988 generic.go:334] "Generic (PLEG): container finished" podID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerID="e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592" exitCode=0 Nov 23 09:12:14 crc kubenswrapper[4988]: I1123 09:12:14.370651 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7s49g" event={"ID":"6ff59599-9dd6-43ed-bde1-ded35dc90ecb","Type":"ContainerDied","Data":"e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592"} Nov 23 09:12:15 crc kubenswrapper[4988]: I1123 09:12:15.383656 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7s49g" 
event={"ID":"6ff59599-9dd6-43ed-bde1-ded35dc90ecb","Type":"ContainerStarted","Data":"610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4"} Nov 23 09:12:15 crc kubenswrapper[4988]: I1123 09:12:15.407223 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7s49g" podStartSLOduration=2.983507713 podStartE2EDuration="6.407184252s" podCreationTimestamp="2025-11-23 09:12:09 +0000 UTC" firstStartedPulling="2025-11-23 09:12:11.341918965 +0000 UTC m=+8783.650431738" lastFinishedPulling="2025-11-23 09:12:14.765595494 +0000 UTC m=+8787.074108277" observedRunningTime="2025-11-23 09:12:15.397863378 +0000 UTC m=+8787.706376171" watchObservedRunningTime="2025-11-23 09:12:15.407184252 +0000 UTC m=+8787.715697015" Nov 23 09:12:19 crc kubenswrapper[4988]: I1123 09:12:19.705361 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:19 crc kubenswrapper[4988]: I1123 09:12:19.706479 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:20 crc kubenswrapper[4988]: I1123 09:12:20.788323 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:20 crc kubenswrapper[4988]: I1123 09:12:20.864238 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:21 crc kubenswrapper[4988]: I1123 09:12:21.068104 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7s49g"] Nov 23 09:12:22 crc kubenswrapper[4988]: I1123 09:12:22.472660 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7s49g" podUID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerName="registry-server" containerID="cri-o://610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4" gracePeriod=2 Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.079486 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.172638 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rghx\" (UniqueName: \"kubernetes.io/projected/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-kube-api-access-5rghx\") pod \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.172811 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-utilities\") pod \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.172986 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-catalog-content\") pod \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\" (UID: \"6ff59599-9dd6-43ed-bde1-ded35dc90ecb\") " Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.173693 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-utilities" (OuterVolumeSpecName: "utilities") pod "6ff59599-9dd6-43ed-bde1-ded35dc90ecb" (UID: "6ff59599-9dd6-43ed-bde1-ded35dc90ecb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.189301 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-kube-api-access-5rghx" (OuterVolumeSpecName: "kube-api-access-5rghx") pod "6ff59599-9dd6-43ed-bde1-ded35dc90ecb" (UID: "6ff59599-9dd6-43ed-bde1-ded35dc90ecb"). InnerVolumeSpecName "kube-api-access-5rghx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.219327 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ff59599-9dd6-43ed-bde1-ded35dc90ecb" (UID: "6ff59599-9dd6-43ed-bde1-ded35dc90ecb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.275925 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.275965 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.275982 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rghx\" (UniqueName: \"kubernetes.io/projected/6ff59599-9dd6-43ed-bde1-ded35dc90ecb-kube-api-access-5rghx\") on node \"crc\" DevicePath \"\"" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.486963 4988 generic.go:334] "Generic (PLEG): container finished" podID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerID="610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4" exitCode=0 Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.487017 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7s49g" event={"ID":"6ff59599-9dd6-43ed-bde1-ded35dc90ecb","Type":"ContainerDied","Data":"610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4"} Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.487050 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7s49g" event={"ID":"6ff59599-9dd6-43ed-bde1-ded35dc90ecb","Type":"ContainerDied","Data":"c219d578659efbe48f74d09b43c983a8630a50e3190bf775c3e55931060f7399"} Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.487043 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7s49g" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.487068 4988 scope.go:117] "RemoveContainer" containerID="610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.496747 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:12:23 crc kubenswrapper[4988]: E1123 09:12:23.497455 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.532303 4988 scope.go:117] "RemoveContainer" containerID="e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.545766 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7s49g"] Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.554387 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7s49g"] Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.573433 4988 scope.go:117] "RemoveContainer" containerID="2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.611123 4988 scope.go:117] "RemoveContainer" containerID="610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4" Nov 23 09:12:23 crc kubenswrapper[4988]: E1123 09:12:23.613035 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4\": container with ID starting with 610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4 not found: ID does not exist" containerID="610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.613077 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4"} err="failed to get container status \"610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4\": rpc error: code = NotFound desc = could not find container \"610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4\": container with ID starting with 610810ed35f65e729829900c560d878bc9bfe8487763fc4ed7024803d943d6b4 not found: ID does not exist" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.613104 4988 scope.go:117] "RemoveContainer" containerID="e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592" Nov 23 09:12:23 crc kubenswrapper[4988]: E1123 09:12:23.613513 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592\": container with ID starting with e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592 not found: ID does not exist" containerID="e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592" Nov 23 09:12:23 crc 
kubenswrapper[4988]: I1123 09:12:23.613537 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592"} err="failed to get container status \"e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592\": rpc error: code = NotFound desc = could not find container \"e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592\": container with ID starting with e6a7e43ed25d0ec20c2784257b9b8944dd50214b28cbec72c321df0354d19592 not found: ID does not exist" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.613553 4988 scope.go:117] "RemoveContainer" containerID="2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc" Nov 23 09:12:23 crc kubenswrapper[4988]: E1123 09:12:23.613960 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc\": container with ID starting with 2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc not found: ID does not exist" containerID="2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc" Nov 23 09:12:23 crc kubenswrapper[4988]: I1123 09:12:23.613982 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc"} err="failed to get container status \"2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc\": rpc error: code = NotFound desc = could not find container \"2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc\": container with ID starting with 2a3662bba56c2cf7b9f6e0eb8e9dd9c78b97d0f60a59948aaf6a429b3977effc not found: ID does not exist" Nov 23 09:12:24 crc kubenswrapper[4988]: I1123 09:12:24.507571 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" path="/var/lib/kubelet/pods/6ff59599-9dd6-43ed-bde1-ded35dc90ecb/volumes" Nov 23 09:12:38 crc kubenswrapper[4988]: I1123 09:12:38.503207 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:12:38 crc kubenswrapper[4988]: E1123 09:12:38.503992 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:12:49 crc kubenswrapper[4988]: I1123 09:12:49.497838 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:12:49 crc kubenswrapper[4988]: E1123 09:12:49.498821 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:13:00 crc kubenswrapper[4988]: I1123 09:13:00.498579 4988 scope.go:117] "RemoveContainer" 
containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:13:00 crc kubenswrapper[4988]: E1123 09:13:00.499630 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:13:12 crc kubenswrapper[4988]: I1123 09:13:12.496720 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:13:12 crc kubenswrapper[4988]: E1123 09:13:12.497365 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:13:25 crc kubenswrapper[4988]: I1123 09:13:25.496449 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:13:25 crc kubenswrapper[4988]: E1123 09:13:25.497366 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:13:39 crc kubenswrapper[4988]: I1123 09:13:39.496304 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:13:39 crc kubenswrapper[4988]: E1123 09:13:39.497127 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.307435 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kxgzh"] Nov 23 09:13:50 crc kubenswrapper[4988]: E1123 09:13:50.309639 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerName="extract-content" Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.309672 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerName="extract-content" Nov 23 09:13:50 crc kubenswrapper[4988]: E1123 09:13:50.309720 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerName="extract-utilities" Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.309738 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerName="extract-utilities" 
Nov 23 09:13:50 crc kubenswrapper[4988]: E1123 09:13:50.309818 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerName="registry-server"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.309837 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerName="registry-server"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.310374 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff59599-9dd6-43ed-bde1-ded35dc90ecb" containerName="registry-server"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.313966 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.320967 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kxgzh"]
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.488035 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-utilities\") pod \"redhat-marketplace-kxgzh\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") " pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.488295 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zxkd\" (UniqueName: \"kubernetes.io/projected/56071199-65c7-4a02-ba7e-b8c6dba190d3-kube-api-access-4zxkd\") pod \"redhat-marketplace-kxgzh\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") " pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.488425 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-catalog-content\") pod \"redhat-marketplace-kxgzh\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") " pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.496238 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"
Nov 23 09:13:50 crc kubenswrapper[4988]: E1123 09:13:50.496450 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.590025 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-utilities\") pod \"redhat-marketplace-kxgzh\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") " pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.590136 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zxkd\" (UniqueName: \"kubernetes.io/projected/56071199-65c7-4a02-ba7e-b8c6dba190d3-kube-api-access-4zxkd\") pod \"redhat-marketplace-kxgzh\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") " pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.590176 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-catalog-content\") pod \"redhat-marketplace-kxgzh\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") " pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.590811 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-catalog-content\") pod \"redhat-marketplace-kxgzh\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") " pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.591957 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-utilities\") pod \"redhat-marketplace-kxgzh\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") " pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.612535 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zxkd\" (UniqueName: \"kubernetes.io/projected/56071199-65c7-4a02-ba7e-b8c6dba190d3-kube-api-access-4zxkd\") pod \"redhat-marketplace-kxgzh\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") " pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:50 crc kubenswrapper[4988]: I1123 09:13:50.652019 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:13:51 crc kubenswrapper[4988]: I1123 09:13:51.226690 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kxgzh"]
Nov 23 09:13:51 crc kubenswrapper[4988]: I1123 09:13:51.749388 4988 generic.go:334] "Generic (PLEG): container finished" podID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerID="f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327" exitCode=0
Nov 23 09:13:51 crc kubenswrapper[4988]: I1123 09:13:51.749493 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kxgzh" event={"ID":"56071199-65c7-4a02-ba7e-b8c6dba190d3","Type":"ContainerDied","Data":"f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327"}
Nov 23 09:13:51 crc kubenswrapper[4988]: I1123 09:13:51.749819 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kxgzh" event={"ID":"56071199-65c7-4a02-ba7e-b8c6dba190d3","Type":"ContainerStarted","Data":"be4126cbddfce50c2f8b18bdaeefb760e1a7d0532443330b09ecc963fe5834ab"}
Nov 23 09:13:52 crc kubenswrapper[4988]: I1123 09:13:52.765943 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kxgzh" event={"ID":"56071199-65c7-4a02-ba7e-b8c6dba190d3","Type":"ContainerStarted","Data":"4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785"}
Nov 23 09:13:53 crc kubenswrapper[4988]: I1123 09:13:53.777723 4988 generic.go:334] "Generic (PLEG): container finished" podID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerID="4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785" exitCode=0
Nov 23 09:13:53 crc kubenswrapper[4988]: I1123 09:13:53.777823 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kxgzh" event={"ID":"56071199-65c7-4a02-ba7e-b8c6dba190d3","Type":"ContainerDied","Data":"4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785"}
Nov 23 09:13:54 crc kubenswrapper[4988]: I1123 09:13:54.790293 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kxgzh" event={"ID":"56071199-65c7-4a02-ba7e-b8c6dba190d3","Type":"ContainerStarted","Data":"54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe"}
Nov 23 09:13:54 crc kubenswrapper[4988]: I1123 09:13:54.820938 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kxgzh" podStartSLOduration=2.228845619 podStartE2EDuration="4.820916694s" podCreationTimestamp="2025-11-23 09:13:50 +0000 UTC" firstStartedPulling="2025-11-23 09:13:51.751167083 +0000 UTC m=+8884.059679856" lastFinishedPulling="2025-11-23 09:13:54.343238168 +0000 UTC m=+8886.651750931" observedRunningTime="2025-11-23 09:13:54.808954005 +0000 UTC m=+8887.117466778" watchObservedRunningTime="2025-11-23 09:13:54.820916694 +0000 UTC m=+8887.129429467"
Nov 23 09:14:00 crc kubenswrapper[4988]: I1123 09:14:00.653187 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:14:00 crc kubenswrapper[4988]: I1123 09:14:00.654004 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:14:00 crc kubenswrapper[4988]: I1123 09:14:00.731738 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:14:00 crc kubenswrapper[4988]: I1123 09:14:00.922982 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:14:00 crc kubenswrapper[4988]: I1123 09:14:00.977320 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kxgzh"]
Nov 23 09:14:02 crc kubenswrapper[4988]: I1123 09:14:02.877786 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kxgzh" podUID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerName="registry-server" containerID="cri-o://54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe" gracePeriod=2
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.371807 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.488370 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zxkd\" (UniqueName: \"kubernetes.io/projected/56071199-65c7-4a02-ba7e-b8c6dba190d3-kube-api-access-4zxkd\") pod \"56071199-65c7-4a02-ba7e-b8c6dba190d3\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") "
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.488611 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-utilities\") pod \"56071199-65c7-4a02-ba7e-b8c6dba190d3\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") "
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.488718 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-catalog-content\") pod \"56071199-65c7-4a02-ba7e-b8c6dba190d3\" (UID: \"56071199-65c7-4a02-ba7e-b8c6dba190d3\") "
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.490296 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-utilities" (OuterVolumeSpecName: "utilities") pod "56071199-65c7-4a02-ba7e-b8c6dba190d3" (UID: "56071199-65c7-4a02-ba7e-b8c6dba190d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.497487 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"
Nov 23 09:14:03 crc kubenswrapper[4988]: E1123 09:14:03.497905 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.501824 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56071199-65c7-4a02-ba7e-b8c6dba190d3-kube-api-access-4zxkd" (OuterVolumeSpecName: "kube-api-access-4zxkd") pod "56071199-65c7-4a02-ba7e-b8c6dba190d3" (UID: "56071199-65c7-4a02-ba7e-b8c6dba190d3"). InnerVolumeSpecName "kube-api-access-4zxkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.514678 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56071199-65c7-4a02-ba7e-b8c6dba190d3" (UID: "56071199-65c7-4a02-ba7e-b8c6dba190d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.591740 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.592209 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zxkd\" (UniqueName: \"kubernetes.io/projected/56071199-65c7-4a02-ba7e-b8c6dba190d3-kube-api-access-4zxkd\") on node \"crc\" DevicePath \"\""
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.592225 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56071199-65c7-4a02-ba7e-b8c6dba190d3-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.888226 4988 generic.go:334] "Generic (PLEG): container finished" podID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerID="54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe" exitCode=0
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.888291 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kxgzh" event={"ID":"56071199-65c7-4a02-ba7e-b8c6dba190d3","Type":"ContainerDied","Data":"54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe"}
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.888331 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kxgzh" event={"ID":"56071199-65c7-4a02-ba7e-b8c6dba190d3","Type":"ContainerDied","Data":"be4126cbddfce50c2f8b18bdaeefb760e1a7d0532443330b09ecc963fe5834ab"}
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.888354 4988 scope.go:117] "RemoveContainer" containerID="54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe"
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.888296 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kxgzh"
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.918089 4988 scope.go:117] "RemoveContainer" containerID="4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785"
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.934313 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kxgzh"]
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.946749 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kxgzh"]
Nov 23 09:14:03 crc kubenswrapper[4988]: I1123 09:14:03.957913 4988 scope.go:117] "RemoveContainer" containerID="f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327"
Nov 23 09:14:04 crc kubenswrapper[4988]: I1123 09:14:04.011703 4988 scope.go:117] "RemoveContainer" containerID="54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe"
Nov 23 09:14:04 crc kubenswrapper[4988]: E1123 09:14:04.012335 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe\": container with ID starting with 54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe not found: ID does not exist" containerID="54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe"
Nov 23 09:14:04 crc kubenswrapper[4988]: I1123 09:14:04.012365 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe"} err="failed to get container status \"54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe\": rpc error: code = NotFound desc = could not find container \"54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe\": container with ID starting with 54c51c63b2d201b76591b71728c685202603896155325a8b875636d1d99586fe not found: ID does not exist"
Nov 23 09:14:04 crc kubenswrapper[4988]: I1123 09:14:04.012386 4988 scope.go:117] "RemoveContainer" containerID="4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785"
Nov 23 09:14:04 crc kubenswrapper[4988]: E1123 09:14:04.012878 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785\": container with ID starting with 4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785 not found: ID does not exist" containerID="4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785"
Nov 23 09:14:04 crc kubenswrapper[4988]: I1123 09:14:04.012953 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785"} err="failed to get container status \"4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785\": rpc error: code = NotFound desc = could not find container \"4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785\": container with ID starting with 4da2742941810e37861e1d5eac938c2b127c126d35d2152cebf3c038b04c7785 not found: ID does not exist"
Nov 23 09:14:04 crc kubenswrapper[4988]: I1123 09:14:04.013067 4988 scope.go:117] "RemoveContainer" containerID="f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327"
Nov 23 09:14:04 crc kubenswrapper[4988]: E1123 09:14:04.013493 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327\": container with ID starting with f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327 not found: ID does not exist" containerID="f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327"
Nov 23 09:14:04 crc kubenswrapper[4988]: I1123 09:14:04.013547 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327"} err="failed to get container status \"f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327\": rpc error: code = NotFound desc = could not find container \"f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327\": container with ID starting with f6e97dfd0876c5234bde1d14773a21d8f0129ee09c4116ccdfff84ac7f1e2327 not found: ID does not exist"
Nov 23 09:14:04 crc kubenswrapper[4988]: I1123 09:14:04.506634 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56071199-65c7-4a02-ba7e-b8c6dba190d3" path="/var/lib/kubelet/pods/56071199-65c7-4a02-ba7e-b8c6dba190d3/volumes"
Nov 23 09:14:14 crc kubenswrapper[4988]: I1123 09:14:14.496609 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"
Nov 23 09:14:14 crc kubenswrapper[4988]: E1123 09:14:14.497467 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:14:27 crc kubenswrapper[4988]: I1123 09:14:27.497029 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"
Nov 23 09:14:27 crc kubenswrapper[4988]: E1123 09:14:27.498256 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:14:40 crc kubenswrapper[4988]: I1123 09:14:40.496963 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"
Nov 23 09:14:40 crc kubenswrapper[4988]: E1123 09:14:40.497816 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:14:54 crc kubenswrapper[4988]: I1123 09:14:54.496628 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"
Nov 23 09:14:54 crc kubenswrapper[4988]: E1123 09:14:54.497455 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.157487 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"]
Nov 23 09:15:00 crc kubenswrapper[4988]: E1123 09:15:00.158653 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerName="extract-content"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.158676 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerName="extract-content"
Nov 23 09:15:00 crc kubenswrapper[4988]: E1123 09:15:00.158689 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerName="extract-utilities"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.158699 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerName="extract-utilities"
Nov 23 09:15:00 crc kubenswrapper[4988]: E1123 09:15:00.158739 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerName="registry-server"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.158748 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerName="registry-server"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.159009 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="56071199-65c7-4a02-ba7e-b8c6dba190d3" containerName="registry-server"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.160069 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.163064 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.163258 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.171451 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"]
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.252290 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-secret-volume\") pod \"collect-profiles-29398155-4bkb9\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.252977 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wtmh\" (UniqueName: \"kubernetes.io/projected/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-kube-api-access-6wtmh\") pod \"collect-profiles-29398155-4bkb9\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.253053 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-config-volume\") pod \"collect-profiles-29398155-4bkb9\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.354614 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wtmh\" (UniqueName: \"kubernetes.io/projected/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-kube-api-access-6wtmh\") pod \"collect-profiles-29398155-4bkb9\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.354657 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-config-volume\") pod \"collect-profiles-29398155-4bkb9\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.354699 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-secret-volume\") pod \"collect-profiles-29398155-4bkb9\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.355564 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-config-volume\") pod \"collect-profiles-29398155-4bkb9\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.361720 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-secret-volume\") pod \"collect-profiles-29398155-4bkb9\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.373034 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wtmh\" (UniqueName: \"kubernetes.io/projected/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-kube-api-access-6wtmh\") pod \"collect-profiles-29398155-4bkb9\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.492908 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:00 crc kubenswrapper[4988]: I1123 09:15:00.959345 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"]
Nov 23 09:15:01 crc kubenswrapper[4988]: I1123 09:15:01.594637 4988 generic.go:334] "Generic (PLEG): container finished" podID="e182ae7e-70bc-484d-a9ec-24ccbc743f1c" containerID="d173f3a1837a067a78ef58458f4bf6e1b2c5e43dae87589dc3bd327f229c3ebb" exitCode=0
Nov 23 09:15:01 crc kubenswrapper[4988]: I1123 09:15:01.594693 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9" event={"ID":"e182ae7e-70bc-484d-a9ec-24ccbc743f1c","Type":"ContainerDied","Data":"d173f3a1837a067a78ef58458f4bf6e1b2c5e43dae87589dc3bd327f229c3ebb"}
Nov 23 09:15:01 crc kubenswrapper[4988]: I1123 09:15:01.594920 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9" event={"ID":"e182ae7e-70bc-484d-a9ec-24ccbc743f1c","Type":"ContainerStarted","Data":"eccf61ecd2372d7929bb532b42b198c3547bbb949e845d90f6adb66cbfec7eba"}
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.058678 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.210783 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wtmh\" (UniqueName: \"kubernetes.io/projected/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-kube-api-access-6wtmh\") pod \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") "
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.210921 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-secret-volume\") pod \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") "
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.210950 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-config-volume\") pod \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\" (UID: \"e182ae7e-70bc-484d-a9ec-24ccbc743f1c\") "
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.211826 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "e182ae7e-70bc-484d-a9ec-24ccbc743f1c" (UID: "e182ae7e-70bc-484d-a9ec-24ccbc743f1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.217962 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-kube-api-access-6wtmh" (OuterVolumeSpecName: "kube-api-access-6wtmh") pod "e182ae7e-70bc-484d-a9ec-24ccbc743f1c" (UID: "e182ae7e-70bc-484d-a9ec-24ccbc743f1c"). InnerVolumeSpecName "kube-api-access-6wtmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.222309 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e182ae7e-70bc-484d-a9ec-24ccbc743f1c" (UID: "e182ae7e-70bc-484d-a9ec-24ccbc743f1c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.313816 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wtmh\" (UniqueName: \"kubernetes.io/projected/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-kube-api-access-6wtmh\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.313854 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.313867 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e182ae7e-70bc-484d-a9ec-24ccbc743f1c-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.620039 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9" event={"ID":"e182ae7e-70bc-484d-a9ec-24ccbc743f1c","Type":"ContainerDied","Data":"eccf61ecd2372d7929bb532b42b198c3547bbb949e845d90f6adb66cbfec7eba"}
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.620144 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eccf61ecd2372d7929bb532b42b198c3547bbb949e845d90f6adb66cbfec7eba"
Nov 23 09:15:03 crc kubenswrapper[4988]: I1123 09:15:03.620556 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-4bkb9"
Nov 23 09:15:04 crc kubenswrapper[4988]: I1123 09:15:04.151600 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c"]
Nov 23 09:15:04 crc kubenswrapper[4988]: I1123 09:15:04.160225 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-8cw8c"]
Nov 23 09:15:04 crc kubenswrapper[4988]: I1123 09:15:04.471612 4988 scope.go:117] "RemoveContainer" containerID="c27332734a445bdc5445d41a43501e32a416376a19d8172d79b0d77473c4018b"
Nov 23 09:15:04 crc kubenswrapper[4988]: I1123 09:15:04.510899 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d2f5e95-acd4-4aff-9ce0-28b5063654fb" path="/var/lib/kubelet/pods/5d2f5e95-acd4-4aff-9ce0-28b5063654fb/volumes"
Nov 23 09:15:07 crc kubenswrapper[4988]: I1123 09:15:07.498590 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"
Nov 23 09:15:07 crc kubenswrapper[4988]: E1123 09:15:07.502278 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:15:22 crc kubenswrapper[4988]: I1123 09:15:22.496300 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66"
Nov 23 09:15:22 crc kubenswrapper[4988]: E1123 09:15:22.497376 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:15:29 crc kubenswrapper[4988]: I1123 09:15:29.938796 4988 generic.go:334] "Generic (PLEG): container finished" podID="ca476e09-7dd2-40e8-9904-330ae85a51e0" containerID="0bbfd609b4fe68808199c55310f66035336e73ca9cbe97c25010e2f7975e11e1" exitCode=0
Nov 23 09:15:29 crc kubenswrapper[4988]: I1123 09:15:29.938872 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" event={"ID":"ca476e09-7dd2-40e8-9904-330ae85a51e0","Type":"ContainerDied","Data":"0bbfd609b4fe68808199c55310f66035336e73ca9cbe97c25010e2f7975e11e1"}
Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.440279 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n"
Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.603788 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-0\") pod \"ca476e09-7dd2-40e8-9904-330ae85a51e0\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") "
Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.603917 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-ssh-key\") pod \"ca476e09-7dd2-40e8-9904-330ae85a51e0\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") "
Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.603973 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-1\") pod \"ca476e09-7dd2-40e8-9904-330ae85a51e0\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") "
Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.604034 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm88m\" (UniqueName: \"kubernetes.io/projected/ca476e09-7dd2-40e8-9904-330ae85a51e0-kube-api-access-sm88m\") pod \"ca476e09-7dd2-40e8-9904-330ae85a51e0\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") "
Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.604698 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-combined-ca-bundle\") pod \"ca476e09-7dd2-40e8-9904-330ae85a51e0\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") "
Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.604803 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-1\") pod \"ca476e09-7dd2-40e8-9904-330ae85a51e0\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") "
Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.604843 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName:
\"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-0\") pod \"ca476e09-7dd2-40e8-9904-330ae85a51e0\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.604877 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-inventory\") pod \"ca476e09-7dd2-40e8-9904-330ae85a51e0\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.604945 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cells-global-config-0\") pod \"ca476e09-7dd2-40e8-9904-330ae85a51e0\" (UID: \"ca476e09-7dd2-40e8-9904-330ae85a51e0\") " Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.610343 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "ca476e09-7dd2-40e8-9904-330ae85a51e0" (UID: "ca476e09-7dd2-40e8-9904-330ae85a51e0"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.708612 4988 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.710353 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca476e09-7dd2-40e8-9904-330ae85a51e0-kube-api-access-sm88m" (OuterVolumeSpecName: "kube-api-access-sm88m") pod "ca476e09-7dd2-40e8-9904-330ae85a51e0" (UID: "ca476e09-7dd2-40e8-9904-330ae85a51e0"). InnerVolumeSpecName "kube-api-access-sm88m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.735118 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "ca476e09-7dd2-40e8-9904-330ae85a51e0" (UID: "ca476e09-7dd2-40e8-9904-330ae85a51e0"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.737861 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ca476e09-7dd2-40e8-9904-330ae85a51e0" (UID: "ca476e09-7dd2-40e8-9904-330ae85a51e0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.743465 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "ca476e09-7dd2-40e8-9904-330ae85a51e0" (UID: "ca476e09-7dd2-40e8-9904-330ae85a51e0"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.746981 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-inventory" (OuterVolumeSpecName: "inventory") pod "ca476e09-7dd2-40e8-9904-330ae85a51e0" (UID: "ca476e09-7dd2-40e8-9904-330ae85a51e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.747460 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "ca476e09-7dd2-40e8-9904-330ae85a51e0" (UID: "ca476e09-7dd2-40e8-9904-330ae85a51e0"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.770274 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "ca476e09-7dd2-40e8-9904-330ae85a51e0" (UID: "ca476e09-7dd2-40e8-9904-330ae85a51e0"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.771083 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "ca476e09-7dd2-40e8-9904-330ae85a51e0" (UID: "ca476e09-7dd2-40e8-9904-330ae85a51e0"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.812571 4988 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.812632 4988 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.812653 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.812671 4988 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.812689 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm88m\" (UniqueName: \"kubernetes.io/projected/ca476e09-7dd2-40e8-9904-330ae85a51e0-kube-api-access-sm88m\") on node \"crc\" DevicePath \"\"" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.812708 4988 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.812725 4988 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.812745 4988 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca476e09-7dd2-40e8-9904-330ae85a51e0-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.965660 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" event={"ID":"ca476e09-7dd2-40e8-9904-330ae85a51e0","Type":"ContainerDied","Data":"041159377c1ae777c53f56fdb105261ac8843d4438cfbabc41a84fb0cf9f9210"} Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.965941 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="041159377c1ae777c53f56fdb105261ac8843d4438cfbabc41a84fb0cf9f9210" Nov 23 09:15:31 crc kubenswrapper[4988]: I1123 09:15:31.965764 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n" Nov 23 09:15:33 crc kubenswrapper[4988]: I1123 09:15:33.496793 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:15:33 crc kubenswrapper[4988]: E1123 09:15:33.497618 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:15:46 crc kubenswrapper[4988]: I1123 09:15:46.496699 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:15:46 crc kubenswrapper[4988]: E1123 09:15:46.497627 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:16:00 crc kubenswrapper[4988]: I1123 09:16:00.497898 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:16:00 crc kubenswrapper[4988]: E1123 09:16:00.499572 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:16:04 crc kubenswrapper[4988]: I1123 09:16:04.527335 4988 scope.go:117] "RemoveContainer" containerID="c310cc7b47f3c608230ac453ffa2dcd903f93d1de8bb8fdf139cd804f86df29a" Nov 23 09:16:04 crc kubenswrapper[4988]: I1123 09:16:04.573018 4988 scope.go:117] "RemoveContainer" containerID="3f8c9148e3104a375a2144582bf060aca4ebb4a9a8ab8bc75f61652c7c559c30" Nov 23 09:16:04 crc kubenswrapper[4988]: I1123 09:16:04.615511 4988 scope.go:117] "RemoveContainer" containerID="b02718e3d30570ecbb0c493254ddf68dae45b51832aa5790f3071f98edcb3bec" Nov 23 09:16:13 crc kubenswrapper[4988]: I1123 09:16:13.496350 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:16:13 crc kubenswrapper[4988]: E1123 09:16:13.496927 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:16:25 crc kubenswrapper[4988]: I1123 09:16:25.496769 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:16:26 crc kubenswrapper[4988]: 
I1123 09:16:26.537168 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"8ca73684f2916f382e3d653e28105ed2e11d7894bedfb368af65f1daa4978404"} Nov 23 09:17:16 crc kubenswrapper[4988]: I1123 09:17:16.339752 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 09:17:16 crc kubenswrapper[4988]: I1123 09:17:16.340702 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="0bc35042-10af-472b-afcf-7408a2efc34d" containerName="adoption" containerID="cri-o://d592cca7a15c44ff3fa4e07b597a2b7e33f8e10ccd6a333bb2867fcde9056d8d" gracePeriod=30 Nov 23 09:17:46 crc kubenswrapper[4988]: I1123 09:17:46.428033 4988 generic.go:334] "Generic (PLEG): container finished" podID="0bc35042-10af-472b-afcf-7408a2efc34d" containerID="d592cca7a15c44ff3fa4e07b597a2b7e33f8e10ccd6a333bb2867fcde9056d8d" exitCode=137 Nov 23 09:17:46 crc kubenswrapper[4988]: I1123 09:17:46.428143 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"0bc35042-10af-472b-afcf-7408a2efc34d","Type":"ContainerDied","Data":"d592cca7a15c44ff3fa4e07b597a2b7e33f8e10ccd6a333bb2867fcde9056d8d"} Nov 23 09:17:46 crc kubenswrapper[4988]: I1123 09:17:46.874171 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 23 09:17:46 crc kubenswrapper[4988]: I1123 09:17:46.932561 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrc4k\" (UniqueName: \"kubernetes.io/projected/0bc35042-10af-472b-afcf-7408a2efc34d-kube-api-access-jrc4k\") pod \"0bc35042-10af-472b-afcf-7408a2efc34d\" (UID: \"0bc35042-10af-472b-afcf-7408a2efc34d\") " Nov 23 09:17:46 crc kubenswrapper[4988]: I1123 09:17:46.933398 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\") pod \"0bc35042-10af-472b-afcf-7408a2efc34d\" (UID: \"0bc35042-10af-472b-afcf-7408a2efc34d\") " Nov 23 09:17:46 crc kubenswrapper[4988]: I1123 09:17:46.939288 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc35042-10af-472b-afcf-7408a2efc34d-kube-api-access-jrc4k" (OuterVolumeSpecName: "kube-api-access-jrc4k") pod "0bc35042-10af-472b-afcf-7408a2efc34d" (UID: "0bc35042-10af-472b-afcf-7408a2efc34d"). InnerVolumeSpecName "kube-api-access-jrc4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:17:46 crc kubenswrapper[4988]: I1123 09:17:46.956167 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8" (OuterVolumeSpecName: "mariadb-data") pod "0bc35042-10af-472b-afcf-7408a2efc34d" (UID: "0bc35042-10af-472b-afcf-7408a2efc34d"). InnerVolumeSpecName "pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.035518 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrc4k\" (UniqueName: \"kubernetes.io/projected/0bc35042-10af-472b-afcf-7408a2efc34d-kube-api-access-jrc4k\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.035573 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\") on node \"crc\" " Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.088352 4988 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.088579 4988 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8") on node "crc" Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.139257 4988 reconciler_common.go:293] "Volume detached for volume \"pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85894ad-372b-4736-be7f-ed97f68dc8d8\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.439380 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"0bc35042-10af-472b-afcf-7408a2efc34d","Type":"ContainerDied","Data":"569dcbb506ea526f1fede077245c923fc1c136be7ebc52a4e33d6e7562c17c2a"} Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.439434 4988 scope.go:117] "RemoveContainer" containerID="d592cca7a15c44ff3fa4e07b597a2b7e33f8e10ccd6a333bb2867fcde9056d8d" Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.439483 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.487253 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 09:17:47 crc kubenswrapper[4988]: I1123 09:17:47.498082 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 09:17:48 crc kubenswrapper[4988]: I1123 09:17:48.074579 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 09:17:48 crc kubenswrapper[4988]: I1123 09:17:48.075156 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="c25fdd76-b138-43e4-bcb2-34ce86c53e02" containerName="adoption" containerID="cri-o://588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9" gracePeriod=30 Nov 23 09:17:48 crc kubenswrapper[4988]: I1123 09:17:48.510724 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc35042-10af-472b-afcf-7408a2efc34d" path="/var/lib/kubelet/pods/0bc35042-10af-472b-afcf-7408a2efc34d/volumes" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.659341 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.738439 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fgrp\" (UniqueName: \"kubernetes.io/projected/c25fdd76-b138-43e4-bcb2-34ce86c53e02-kube-api-access-2fgrp\") pod \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.739054 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c25fdd76-b138-43e4-bcb2-34ce86c53e02-ovn-data-cert\") pod \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.740714 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\") pod \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\" (UID: \"c25fdd76-b138-43e4-bcb2-34ce86c53e02\") " Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.749115 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c25fdd76-b138-43e4-bcb2-34ce86c53e02-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "c25fdd76-b138-43e4-bcb2-34ce86c53e02" (UID: "c25fdd76-b138-43e4-bcb2-34ce86c53e02"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.750065 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25fdd76-b138-43e4-bcb2-34ce86c53e02-kube-api-access-2fgrp" (OuterVolumeSpecName: "kube-api-access-2fgrp") pod "c25fdd76-b138-43e4-bcb2-34ce86c53e02" (UID: "c25fdd76-b138-43e4-bcb2-34ce86c53e02"). InnerVolumeSpecName "kube-api-access-2fgrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.771451 4988 generic.go:334] "Generic (PLEG): container finished" podID="c25fdd76-b138-43e4-bcb2-34ce86c53e02" containerID="588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9" exitCode=137 Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.771749 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c25fdd76-b138-43e4-bcb2-34ce86c53e02","Type":"ContainerDied","Data":"588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9"} Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.771750 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.771846 4988 scope.go:117] "RemoveContainer" containerID="588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.771828 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c25fdd76-b138-43e4-bcb2-34ce86c53e02","Type":"ContainerDied","Data":"b29a6564dd1ffcc11839b7741a7b13d87448d49b77e1c837c66e4cc9d6ab8b43"} Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.778977 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811" (OuterVolumeSpecName: "ovn-data") pod "c25fdd76-b138-43e4-bcb2-34ce86c53e02" (UID: "c25fdd76-b138-43e4-bcb2-34ce86c53e02"). InnerVolumeSpecName "pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.845964 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\") on node \"crc\" " Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.846344 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fgrp\" (UniqueName: \"kubernetes.io/projected/c25fdd76-b138-43e4-bcb2-34ce86c53e02-kube-api-access-2fgrp\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.846359 4988 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c25fdd76-b138-43e4-bcb2-34ce86c53e02-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.848808 4988 scope.go:117] "RemoveContainer" containerID="588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9" Nov 23 09:18:18 crc kubenswrapper[4988]: E1123 09:18:18.851396 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9\": container with ID starting with 588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9 not found: ID does not exist" containerID="588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.851434 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9"} err="failed to get container status \"588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9\": rpc error: code = NotFound desc = could not find container \"588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9\": container with ID starting with 588d8d91a5dbfaa25bbf3d0734949f423c4f6c6711386e05f6c16e5bc3065cf9 not found: ID does not exist" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.870346 4988 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.870512 4988 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811") on node "crc" Nov 23 09:18:18 crc kubenswrapper[4988]: I1123 09:18:18.948143 4988 reconciler_common.go:293] "Volume detached for volume \"pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0732d6-61ea-4ec3-9c7a-56c5f8167811\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:19 crc kubenswrapper[4988]: I1123 09:18:19.122853 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 09:18:19 crc kubenswrapper[4988]: I1123 09:18:19.135220 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 09:18:20 crc kubenswrapper[4988]: I1123 09:18:20.517075 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25fdd76-b138-43e4-bcb2-34ce86c53e02" path="/var/lib/kubelet/pods/c25fdd76-b138-43e4-bcb2-34ce86c53e02/volumes" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.989066 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 09:18:38 crc kubenswrapper[4988]: E1123 09:18:38.990030 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e182ae7e-70bc-484d-a9ec-24ccbc743f1c" containerName="collect-profiles" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.990046 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e182ae7e-70bc-484d-a9ec-24ccbc743f1c" containerName="collect-profiles" Nov 23 09:18:38 crc kubenswrapper[4988]: E1123 09:18:38.990057 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc35042-10af-472b-afcf-7408a2efc34d" containerName="adoption" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.990064 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc35042-10af-472b-afcf-7408a2efc34d" containerName="adoption" Nov 23 09:18:38 crc kubenswrapper[4988]: E1123 09:18:38.990088 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca476e09-7dd2-40e8-9904-330ae85a51e0" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.990096 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca476e09-7dd2-40e8-9904-330ae85a51e0" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 23 09:18:38 crc kubenswrapper[4988]: E1123 09:18:38.990110 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25fdd76-b138-43e4-bcb2-34ce86c53e02" containerName="adoption" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.990116 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25fdd76-b138-43e4-bcb2-34ce86c53e02" containerName="adoption" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.990318 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc35042-10af-472b-afcf-7408a2efc34d" containerName="adoption" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.990332 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca476e09-7dd2-40e8-9904-330ae85a51e0" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.990348 4988 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c25fdd76-b138-43e4-bcb2-34ce86c53e02" containerName="adoption" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.990357 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="e182ae7e-70bc-484d-a9ec-24ccbc743f1c" containerName="collect-profiles" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.991144 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.993556 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.993866 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q8l7k" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.993929 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 23 09:18:38 crc kubenswrapper[4988]: I1123 09:18:38.994290 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.011836 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.169675 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.169749 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-config-data\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.169828 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.169891 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrv99\" (UniqueName: \"kubernetes.io/projected/11f8f692-04c1-427a-b77f-686e3f8409ed-kube-api-access-vrv99\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.169966 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.170022 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.170081 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.170170 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.170293 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.271822 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.271877 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.271939 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.271995 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.272019 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-config-data\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.272051 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.272080 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrv99\" (UniqueName: \"kubernetes.io/projected/11f8f692-04c1-427a-b77f-686e3f8409ed-kube-api-access-vrv99\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.272112 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.272139 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.272508 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.273010 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.273399 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.273946 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-config-data\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.273963 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.278721 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc 
kubenswrapper[4988]: I1123 09:18:39.279668 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.290983 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrv99\" (UniqueName: \"kubernetes.io/projected/11f8f692-04c1-427a-b77f-686e3f8409ed-kube-api-access-vrv99\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.315267 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.344359 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " pod="openstack/tempest-tests-tempest" Nov 23 09:18:39 crc kubenswrapper[4988]: I1123 09:18:39.640689 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 09:18:40 crc kubenswrapper[4988]: I1123 09:18:40.137180 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 09:18:40 crc kubenswrapper[4988]: I1123 09:18:40.140891 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:18:40 crc kubenswrapper[4988]: I1123 09:18:40.250491 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"11f8f692-04c1-427a-b77f-686e3f8409ed","Type":"ContainerStarted","Data":"90e6032fb12ff90c21aebb4c6acdbc7d4b034023bee0ecb29434c54f2518a634"} Nov 23 09:18:51 crc kubenswrapper[4988]: I1123 09:18:51.672965 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:18:51 crc kubenswrapper[4988]: I1123 09:18:51.673562 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:19:21 crc kubenswrapper[4988]: I1123 09:19:21.673099 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:19:21 crc kubenswrapper[4988]: I1123 09:19:21.673994 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" 
podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:19:23 crc kubenswrapper[4988]: E1123 09:19:23.232323 4988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 09:19:23 crc kubenswrapper[4988]: E1123 09:19:23.232664 4988 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 09:19:23 crc kubenswrapper[4988]: E1123 09:19:23.232802 4988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrv99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjec
tReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(11f8f692-04c1-427a-b77f-686e3f8409ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 09:19:23 crc kubenswrapper[4988]: E1123 09:19:23.233979 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="11f8f692-04c1-427a-b77f-686e3f8409ed" Nov 23 09:19:23 crc kubenswrapper[4988]: E1123 09:19:23.708885 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/tempest-tests-tempest" podUID="11f8f692-04c1-427a-b77f-686e3f8409ed" Nov 23 09:19:38 crc kubenswrapper[4988]: I1123 09:19:38.737143 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 23 09:19:39 crc kubenswrapper[4988]: I1123 09:19:39.916629 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"11f8f692-04c1-427a-b77f-686e3f8409ed","Type":"ContainerStarted","Data":"f63010fa5ed55fa91abccb8f839409849a9f53036c1d25a6a1499701060d3928"} Nov 23 09:19:39 crc kubenswrapper[4988]: I1123 09:19:39.951378 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.35909942 podStartE2EDuration="1m2.951356237s" podCreationTimestamp="2025-11-23 09:18:37 +0000 UTC" firstStartedPulling="2025-11-23 09:18:40.140704566 +0000 UTC m=+9172.449217329" lastFinishedPulling="2025-11-23 09:19:38.732961373 +0000 UTC m=+9231.041474146" observedRunningTime="2025-11-23 09:19:39.942545506 +0000 UTC m=+9232.251058289" watchObservedRunningTime="2025-11-23 09:19:39.951356237 +0000 UTC m=+9232.259869000" Nov 23 09:19:51 crc kubenswrapper[4988]: I1123 09:19:51.671979 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:19:51 crc kubenswrapper[4988]: I1123 09:19:51.672550 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:19:51 crc kubenswrapper[4988]: I1123 09:19:51.672608 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 09:19:51 crc kubenswrapper[4988]: I1123 09:19:51.673389 4988 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ca73684f2916f382e3d653e28105ed2e11d7894bedfb368af65f1daa4978404"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:19:51 crc kubenswrapper[4988]: I1123 09:19:51.673443 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://8ca73684f2916f382e3d653e28105ed2e11d7894bedfb368af65f1daa4978404" gracePeriod=600 Nov 23 09:19:52 crc kubenswrapper[4988]: I1123 09:19:52.058132 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="8ca73684f2916f382e3d653e28105ed2e11d7894bedfb368af65f1daa4978404" exitCode=0 Nov 23 09:19:52 crc kubenswrapper[4988]: I1123 09:19:52.058232 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"8ca73684f2916f382e3d653e28105ed2e11d7894bedfb368af65f1daa4978404"} Nov 23 09:19:52 crc kubenswrapper[4988]: I1123 09:19:52.058594 4988 scope.go:117] "RemoveContainer" containerID="eba44a1a6583306657adb25c284c5605d62405cf63f93026a9d16f0a760bbd66" Nov 23 09:19:53 crc kubenswrapper[4988]: I1123 09:19:53.072366 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf"} Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.306544 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b47cl"] Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.310501 4988 util.go:30] "No sandbox for pod can be found. 
Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.306544 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b47cl"] Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.310501 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.376748 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b47cl"] Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.433023 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-utilities\") pod \"redhat-operators-b47cl\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.433074 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-catalog-content\") pod \"redhat-operators-b47cl\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.433117 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98q4n\" (UniqueName: \"kubernetes.io/projected/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-kube-api-access-98q4n\") pod \"redhat-operators-b47cl\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.535657 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-utilities\") pod \"redhat-operators-b47cl\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.536346 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-utilities\") pod \"redhat-operators-b47cl\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.536408 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-catalog-content\") pod \"redhat-operators-b47cl\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.536461 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98q4n\" (UniqueName: \"kubernetes.io/projected/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-kube-api-access-98q4n\") pod \"redhat-operators-b47cl\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.536964 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-catalog-content\") pod \"redhat-operators-b47cl\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.560732 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-98q4n\" (UniqueName: \"kubernetes.io/projected/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-kube-api-access-98q4n\") pod \"redhat-operators-b47cl\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:36 crc kubenswrapper[4988]: I1123 09:20:36.628516 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:37 crc kubenswrapper[4988]: I1123 09:20:37.176358 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b47cl"] Nov 23 09:20:37 crc kubenswrapper[4988]: W1123 09:20:37.181317 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dd618f6_9162_4b8f_bd0b_8ca702c6a3be.slice/crio-0124e80dcd9868e296cb24015d61621dff6d68c4205cbde9bad6d280ae9351f0 WatchSource:0}: Error finding container 0124e80dcd9868e296cb24015d61621dff6d68c4205cbde9bad6d280ae9351f0: Status 404 returned error can't find the container with id 0124e80dcd9868e296cb24015d61621dff6d68c4205cbde9bad6d280ae9351f0 Nov 23 09:20:37 crc kubenswrapper[4988]: I1123 09:20:37.958151 4988 generic.go:334] "Generic (PLEG): container finished" podID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerID="f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37" exitCode=0 Nov 23 09:20:37 crc kubenswrapper[4988]: I1123 09:20:37.958312 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b47cl" event={"ID":"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be","Type":"ContainerDied","Data":"f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37"} Nov 23 09:20:37 crc kubenswrapper[4988]: I1123 09:20:37.958707 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b47cl" event={"ID":"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be","Type":"ContainerStarted","Data":"0124e80dcd9868e296cb24015d61621dff6d68c4205cbde9bad6d280ae9351f0"} Nov 23 09:20:38 crc kubenswrapper[4988]: I1123 09:20:38.971409 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b47cl" event={"ID":"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be","Type":"ContainerStarted","Data":"1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d"} Nov 23 09:20:45 crc kubenswrapper[4988]: I1123 09:20:45.036054 4988 generic.go:334] "Generic (PLEG): container finished" podID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerID="1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d" exitCode=0 Nov 23 09:20:45 crc kubenswrapper[4988]: I1123 09:20:45.036145 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b47cl" event={"ID":"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be","Type":"ContainerDied","Data":"1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d"} Nov 23 09:20:47 crc kubenswrapper[4988]: I1123 09:20:47.057589 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b47cl" event={"ID":"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be","Type":"ContainerStarted","Data":"0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297"} Nov 23 09:20:47 crc kubenswrapper[4988]: I1123 09:20:47.087749 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b47cl" podStartSLOduration=3.615349239 podStartE2EDuration="11.08772471s" 
podCreationTimestamp="2025-11-23 09:20:36 +0000 UTC" firstStartedPulling="2025-11-23 09:20:37.962244494 +0000 UTC m=+9290.270757257" lastFinishedPulling="2025-11-23 09:20:45.434619965 +0000 UTC m=+9297.743132728" observedRunningTime="2025-11-23 09:20:47.078472269 +0000 UTC m=+9299.386985022" watchObservedRunningTime="2025-11-23 09:20:47.08772471 +0000 UTC m=+9299.396237473" Nov 23 09:20:56 crc kubenswrapper[4988]: I1123 09:20:56.628999 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:56 crc kubenswrapper[4988]: I1123 09:20:56.629704 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:57 crc kubenswrapper[4988]: I1123 09:20:57.172648 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:57 crc kubenswrapper[4988]: I1123 09:20:57.225840 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:20:57 crc kubenswrapper[4988]: I1123 09:20:57.410365 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b47cl"] Nov 23 09:20:59 crc kubenswrapper[4988]: I1123 09:20:59.181715 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b47cl" podUID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerName="registry-server" containerID="cri-o://0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297" gracePeriod=2 Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.122544 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.192593 4988 generic.go:334] "Generic (PLEG): container finished" podID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerID="0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297" exitCode=0 Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.192632 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b47cl" event={"ID":"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be","Type":"ContainerDied","Data":"0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297"} Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.192657 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b47cl" event={"ID":"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be","Type":"ContainerDied","Data":"0124e80dcd9868e296cb24015d61621dff6d68c4205cbde9bad6d280ae9351f0"} Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.192657 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b47cl" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.192672 4988 scope.go:117] "RemoveContainer" containerID="0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.217344 4988 scope.go:117] "RemoveContainer" containerID="1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.235022 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-catalog-content\") pod \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.235344 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98q4n\" (UniqueName: \"kubernetes.io/projected/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-kube-api-access-98q4n\") pod \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.235390 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-utilities\") pod \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\" (UID: \"7dd618f6-9162-4b8f-bd0b-8ca702c6a3be\") " Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.241501 4988 scope.go:117] "RemoveContainer" containerID="f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.243150 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-utilities" (OuterVolumeSpecName: "utilities") pod "7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" (UID: "7dd618f6-9162-4b8f-bd0b-8ca702c6a3be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.243179 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-kube-api-access-98q4n" (OuterVolumeSpecName: "kube-api-access-98q4n") pod "7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" (UID: "7dd618f6-9162-4b8f-bd0b-8ca702c6a3be"). InnerVolumeSpecName "kube-api-access-98q4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.338677 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98q4n\" (UniqueName: \"kubernetes.io/projected/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-kube-api-access-98q4n\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.338711 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.342804 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" (UID: "7dd618f6-9162-4b8f-bd0b-8ca702c6a3be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.350350 4988 scope.go:117] "RemoveContainer" containerID="0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297" Nov 23 09:21:00 crc kubenswrapper[4988]: E1123 09:21:00.350863 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297\": container with ID starting with 0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297 not found: ID does not exist" containerID="0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.350908 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297"} err="failed to get container status \"0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297\": rpc error: code = NotFound desc = could not find container \"0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297\": container with ID starting with 0bf561dbc39802667e1e4de08bd0194181efae35f8081b2bd07067b8b8db5297 not found: ID does not exist" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.350935 4988 scope.go:117] "RemoveContainer" containerID="1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d" Nov 23 09:21:00 crc kubenswrapper[4988]: E1123 09:21:00.351374 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d\": container with ID starting with 1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d not found: ID does not exist" containerID="1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.351406 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d"} err="failed to get container status \"1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d\": rpc error: code = NotFound desc = could not find container \"1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d\": container with ID starting with 1e09f8e7f74964d428a60be723ffbfe1bb8c6a36da49e540968b7ab2a0440f1d not found: ID does not exist" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.351428 4988 scope.go:117] "RemoveContainer" containerID="f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37" Nov 23 09:21:00 crc kubenswrapper[4988]: E1123 09:21:00.351778 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37\": container with ID starting with f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37 not found: ID does not exist" containerID="f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.351808 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37"} err="failed to get container status \"f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37\": rpc error: code = NotFound desc = could not 
find container \"f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37\": container with ID starting with f52a6def4f8ee8268738fc9b725b838e5f1186b809eee9273c6c37bfdae33f37 not found: ID does not exist" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.440400 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.532734 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b47cl"] Nov 23 09:21:00 crc kubenswrapper[4988]: I1123 09:21:00.543352 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b47cl"] Nov 23 09:21:02 crc kubenswrapper[4988]: I1123 09:21:02.506935 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" path="/var/lib/kubelet/pods/7dd618f6-9162-4b8f-bd0b-8ca702c6a3be/volumes" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.347343 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4pc2h"] Nov 23 09:22:20 crc kubenswrapper[4988]: E1123 09:22:20.348386 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerName="extract-utilities" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.348400 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerName="extract-utilities" Nov 23 09:22:20 crc kubenswrapper[4988]: E1123 09:22:20.348420 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerName="registry-server" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.348428 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerName="registry-server" Nov 23 09:22:20 crc kubenswrapper[4988]: E1123 09:22:20.348443 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerName="extract-content" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.348451 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerName="extract-content" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.348685 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd618f6-9162-4b8f-bd0b-8ca702c6a3be" containerName="registry-server" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.350256 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.364458 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4pc2h"] Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.488834 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-catalog-content\") pod \"certified-operators-4pc2h\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.488919 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-utilities\") pod \"certified-operators-4pc2h\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.488990 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k55xn\" (UniqueName: \"kubernetes.io/projected/3d56af63-4c27-4111-8b7b-befcf32100b2-kube-api-access-k55xn\") pod \"certified-operators-4pc2h\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.591327 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-catalog-content\") pod \"certified-operators-4pc2h\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.591399 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-utilities\") pod \"certified-operators-4pc2h\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.591448 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k55xn\" (UniqueName: \"kubernetes.io/projected/3d56af63-4c27-4111-8b7b-befcf32100b2-kube-api-access-k55xn\") pod \"certified-operators-4pc2h\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.591777 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-catalog-content\") pod \"certified-operators-4pc2h\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.591990 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-utilities\") pod \"certified-operators-4pc2h\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.632241 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k55xn\" (UniqueName: \"kubernetes.io/projected/3d56af63-4c27-4111-8b7b-befcf32100b2-kube-api-access-k55xn\") pod \"certified-operators-4pc2h\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:20 crc kubenswrapper[4988]: I1123 09:22:20.674709 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.239964 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4pc2h"] Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.340461 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9slnf"] Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.342971 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.361505 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9slnf"] Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.412225 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l6hx\" (UniqueName: \"kubernetes.io/projected/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-kube-api-access-8l6hx\") pod \"community-operators-9slnf\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.412317 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-utilities\") pod \"community-operators-9slnf\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.412494 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-catalog-content\") pod \"community-operators-9slnf\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.514438 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-utilities\") pod \"community-operators-9slnf\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.514781 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-catalog-content\") pod \"community-operators-9slnf\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.514938 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l6hx\" (UniqueName: \"kubernetes.io/projected/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-kube-api-access-8l6hx\") pod \"community-operators-9slnf\" (UID: 
\"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.515870 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-utilities\") pod \"community-operators-9slnf\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.515949 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-catalog-content\") pod \"community-operators-9slnf\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.533758 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l6hx\" (UniqueName: \"kubernetes.io/projected/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-kube-api-access-8l6hx\") pod \"community-operators-9slnf\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.671716 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.672285 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:22:21 crc kubenswrapper[4988]: I1123 09:22:21.672449 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:22:22 crc kubenswrapper[4988]: I1123 09:22:22.027080 4988 generic.go:334] "Generic (PLEG): container finished" podID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerID="9b78bd0a4cda2107a371c6f2c81eb43e9fa1b4f74083b40def06548e157c37b7" exitCode=0 Nov 23 09:22:22 crc kubenswrapper[4988]: I1123 09:22:22.027154 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pc2h" event={"ID":"3d56af63-4c27-4111-8b7b-befcf32100b2","Type":"ContainerDied","Data":"9b78bd0a4cda2107a371c6f2c81eb43e9fa1b4f74083b40def06548e157c37b7"} Nov 23 09:22:22 crc kubenswrapper[4988]: I1123 09:22:22.027467 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pc2h" event={"ID":"3d56af63-4c27-4111-8b7b-befcf32100b2","Type":"ContainerStarted","Data":"628aa2d6baadee1f5a3abe9a3dd9cbdec085c54f363b8ea0915446ef709c4032"} Nov 23 09:22:22 crc kubenswrapper[4988]: W1123 09:22:22.168427 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d71c6dd_d668_4afb_a5ce_791aa542a8ab.slice/crio-8aa34b5022a7e406b6a5b30a0b11ea2f98e3cb94de1e22aefd8d8c4586971c9e WatchSource:0}: Error finding container 8aa34b5022a7e406b6a5b30a0b11ea2f98e3cb94de1e22aefd8d8c4586971c9e: Status 404 returned error can't 
Nov 23 09:22:22 crc kubenswrapper[4988]: I1123 09:22:22.027080 4988 generic.go:334] "Generic (PLEG): container finished" podID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerID="9b78bd0a4cda2107a371c6f2c81eb43e9fa1b4f74083b40def06548e157c37b7" exitCode=0 Nov 23 09:22:22 crc kubenswrapper[4988]: I1123 09:22:22.027154 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pc2h" event={"ID":"3d56af63-4c27-4111-8b7b-befcf32100b2","Type":"ContainerDied","Data":"9b78bd0a4cda2107a371c6f2c81eb43e9fa1b4f74083b40def06548e157c37b7"} Nov 23 09:22:22 crc kubenswrapper[4988]: I1123 09:22:22.027467 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pc2h" event={"ID":"3d56af63-4c27-4111-8b7b-befcf32100b2","Type":"ContainerStarted","Data":"628aa2d6baadee1f5a3abe9a3dd9cbdec085c54f363b8ea0915446ef709c4032"} Nov 23 09:22:22 crc kubenswrapper[4988]: W1123 09:22:22.168427 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d71c6dd_d668_4afb_a5ce_791aa542a8ab.slice/crio-8aa34b5022a7e406b6a5b30a0b11ea2f98e3cb94de1e22aefd8d8c4586971c9e WatchSource:0}: Error finding container 8aa34b5022a7e406b6a5b30a0b11ea2f98e3cb94de1e22aefd8d8c4586971c9e: Status 404 returned error can't find the container with id 8aa34b5022a7e406b6a5b30a0b11ea2f98e3cb94de1e22aefd8d8c4586971c9e Nov 23 09:22:22 crc kubenswrapper[4988]: I1123 09:22:22.171435 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9slnf"] Nov 23 09:22:23 crc kubenswrapper[4988]: I1123 09:22:23.038475 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerID="161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a" exitCode=0 Nov 23 09:22:23 crc kubenswrapper[4988]: I1123 09:22:23.038543 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slnf" event={"ID":"1d71c6dd-d668-4afb-a5ce-791aa542a8ab","Type":"ContainerDied","Data":"161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a"} Nov 23 09:22:23 crc kubenswrapper[4988]: I1123 09:22:23.038759 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slnf" event={"ID":"1d71c6dd-d668-4afb-a5ce-791aa542a8ab","Type":"ContainerStarted","Data":"8aa34b5022a7e406b6a5b30a0b11ea2f98e3cb94de1e22aefd8d8c4586971c9e"} Nov 23 09:22:24 crc kubenswrapper[4988]: I1123 09:22:24.049449 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pc2h" event={"ID":"3d56af63-4c27-4111-8b7b-befcf32100b2","Type":"ContainerStarted","Data":"c625307de3e187be854d7c60813ea28fb48cd9f51fb579daff2cbcbf29b4c409"} Nov 23 09:22:24 crc kubenswrapper[4988]: I1123 09:22:24.051533 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slnf" event={"ID":"1d71c6dd-d668-4afb-a5ce-791aa542a8ab","Type":"ContainerStarted","Data":"65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9"} Nov 23 09:22:26 crc kubenswrapper[4988]: I1123 09:22:26.074707 4988 generic.go:334] "Generic (PLEG): container finished" podID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerID="c625307de3e187be854d7c60813ea28fb48cd9f51fb579daff2cbcbf29b4c409" exitCode=0 Nov 23 09:22:26 crc kubenswrapper[4988]: I1123 09:22:26.074836 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pc2h" event={"ID":"3d56af63-4c27-4111-8b7b-befcf32100b2","Type":"ContainerDied","Data":"c625307de3e187be854d7c60813ea28fb48cd9f51fb579daff2cbcbf29b4c409"} Nov 23 09:22:27 crc kubenswrapper[4988]: I1123 09:22:27.103620 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pc2h" event={"ID":"3d56af63-4c27-4111-8b7b-befcf32100b2","Type":"ContainerStarted","Data":"1389e89e4d820c1cf9c3239cdeead62148efdb733225f5af094791c350e7e101"} Nov 23 09:22:27 crc kubenswrapper[4988]: I1123 09:22:27.119011 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerID="65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9" exitCode=0 Nov 23 09:22:27 crc kubenswrapper[4988]: I1123 09:22:27.119113 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slnf" event={"ID":"1d71c6dd-d668-4afb-a5ce-791aa542a8ab","Type":"ContainerDied","Data":"65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9"} Nov 23 09:22:27 crc kubenswrapper[4988]: I1123 09:22:27.140364 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4pc2h" podStartSLOduration=2.679031133 podStartE2EDuration="7.140338702s"
podCreationTimestamp="2025-11-23 09:22:20 +0000 UTC" firstStartedPulling="2025-11-23 09:22:22.029457815 +0000 UTC m=+9394.337970578" lastFinishedPulling="2025-11-23 09:22:26.490765384 +0000 UTC m=+9398.799278147" observedRunningTime="2025-11-23 09:22:27.12588489 +0000 UTC m=+9399.434397653" watchObservedRunningTime="2025-11-23 09:22:27.140338702 +0000 UTC m=+9399.448851465" Nov 23 09:22:29 crc kubenswrapper[4988]: I1123 09:22:29.141603 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slnf" event={"ID":"1d71c6dd-d668-4afb-a5ce-791aa542a8ab","Type":"ContainerStarted","Data":"23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902"} Nov 23 09:22:29 crc kubenswrapper[4988]: I1123 09:22:29.177789 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9slnf" podStartSLOduration=3.45252335 podStartE2EDuration="8.177769355s" podCreationTimestamp="2025-11-23 09:22:21 +0000 UTC" firstStartedPulling="2025-11-23 09:22:23.040759386 +0000 UTC m=+9395.349272149" lastFinishedPulling="2025-11-23 09:22:27.766005391 +0000 UTC m=+9400.074518154" observedRunningTime="2025-11-23 09:22:29.173430297 +0000 UTC m=+9401.481943060" watchObservedRunningTime="2025-11-23 09:22:29.177769355 +0000 UTC m=+9401.486282118" Nov 23 09:22:30 crc kubenswrapper[4988]: I1123 09:22:30.675103 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:30 crc kubenswrapper[4988]: I1123 09:22:30.675181 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:31 crc kubenswrapper[4988]: I1123 09:22:31.672447 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:31 crc kubenswrapper[4988]: I1123 09:22:31.672798 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:31 crc kubenswrapper[4988]: I1123 09:22:31.720676 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4pc2h" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerName="registry-server" probeResult="failure" output=< Nov 23 09:22:31 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 09:22:31 crc kubenswrapper[4988]: > Nov 23 09:22:32 crc kubenswrapper[4988]: I1123 09:22:32.718976 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9slnf" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerName="registry-server" probeResult="failure" output=< Nov 23 09:22:32 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 09:22:32 crc kubenswrapper[4988]: > Nov 23 09:22:40 crc kubenswrapper[4988]: I1123 09:22:40.732493 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:41 crc kubenswrapper[4988]: I1123 09:22:41.184513 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:41 crc kubenswrapper[4988]: I1123 09:22:41.242089 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4pc2h"] Nov 23 09:22:41 crc kubenswrapper[4988]: I1123 09:22:41.749818 4988 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:41 crc kubenswrapper[4988]: I1123 09:22:41.802125 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:42 crc kubenswrapper[4988]: I1123 09:22:42.277342 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4pc2h" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerName="registry-server" containerID="cri-o://1389e89e4d820c1cf9c3239cdeead62148efdb733225f5af094791c350e7e101" gracePeriod=2 Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.288674 4988 generic.go:334] "Generic (PLEG): container finished" podID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerID="1389e89e4d820c1cf9c3239cdeead62148efdb733225f5af094791c350e7e101" exitCode=0 Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.288842 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pc2h" event={"ID":"3d56af63-4c27-4111-8b7b-befcf32100b2","Type":"ContainerDied","Data":"1389e89e4d820c1cf9c3239cdeead62148efdb733225f5af094791c350e7e101"} Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.289313 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4pc2h" event={"ID":"3d56af63-4c27-4111-8b7b-befcf32100b2","Type":"ContainerDied","Data":"628aa2d6baadee1f5a3abe9a3dd9cbdec085c54f363b8ea0915446ef709c4032"} Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.289330 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="628aa2d6baadee1f5a3abe9a3dd9cbdec085c54f363b8ea0915446ef709c4032" Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.315356 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.376278 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9slnf"] Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.376560 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9slnf" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerName="registry-server" containerID="cri-o://23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902" gracePeriod=2 Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.483909 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-utilities\") pod \"3d56af63-4c27-4111-8b7b-befcf32100b2\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.485037 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k55xn\" (UniqueName: \"kubernetes.io/projected/3d56af63-4c27-4111-8b7b-befcf32100b2-kube-api-access-k55xn\") pod \"3d56af63-4c27-4111-8b7b-befcf32100b2\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.485181 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-catalog-content\") pod \"3d56af63-4c27-4111-8b7b-befcf32100b2\" (UID: \"3d56af63-4c27-4111-8b7b-befcf32100b2\") " Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.485025 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-utilities" (OuterVolumeSpecName: "utilities") pod "3d56af63-4c27-4111-8b7b-befcf32100b2" (UID: "3d56af63-4c27-4111-8b7b-befcf32100b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.487652 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.491653 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d56af63-4c27-4111-8b7b-befcf32100b2-kube-api-access-k55xn" (OuterVolumeSpecName: "kube-api-access-k55xn") pod "3d56af63-4c27-4111-8b7b-befcf32100b2" (UID: "3d56af63-4c27-4111-8b7b-befcf32100b2"). InnerVolumeSpecName "kube-api-access-k55xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.569005 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d56af63-4c27-4111-8b7b-befcf32100b2" (UID: "3d56af63-4c27-4111-8b7b-befcf32100b2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.589951 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d56af63-4c27-4111-8b7b-befcf32100b2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:22:43 crc kubenswrapper[4988]: I1123 09:22:43.589985 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k55xn\" (UniqueName: \"kubernetes.io/projected/3d56af63-4c27-4111-8b7b-befcf32100b2-kube-api-access-k55xn\") on node \"crc\" DevicePath \"\"" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.012369 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.098477 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-utilities\") pod \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.098516 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l6hx\" (UniqueName: \"kubernetes.io/projected/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-kube-api-access-8l6hx\") pod \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.098560 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-catalog-content\") pod \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\" (UID: \"1d71c6dd-d668-4afb-a5ce-791aa542a8ab\") " Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.103939 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-utilities" (OuterVolumeSpecName: "utilities") pod "1d71c6dd-d668-4afb-a5ce-791aa542a8ab" (UID: "1d71c6dd-d668-4afb-a5ce-791aa542a8ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.108664 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-kube-api-access-8l6hx" (OuterVolumeSpecName: "kube-api-access-8l6hx") pod "1d71c6dd-d668-4afb-a5ce-791aa542a8ab" (UID: "1d71c6dd-d668-4afb-a5ce-791aa542a8ab"). InnerVolumeSpecName "kube-api-access-8l6hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.147163 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d71c6dd-d668-4afb-a5ce-791aa542a8ab" (UID: "1d71c6dd-d668-4afb-a5ce-791aa542a8ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.200351 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.200393 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l6hx\" (UniqueName: \"kubernetes.io/projected/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-kube-api-access-8l6hx\") on node \"crc\" DevicePath \"\"" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.200415 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d71c6dd-d668-4afb-a5ce-791aa542a8ab-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.300948 4988 generic.go:334] "Generic (PLEG): container finished" podID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerID="23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902" exitCode=0 Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.301007 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slnf" event={"ID":"1d71c6dd-d668-4afb-a5ce-791aa542a8ab","Type":"ContainerDied","Data":"23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902"} Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.301042 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4pc2h" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.301043 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9slnf" event={"ID":"1d71c6dd-d668-4afb-a5ce-791aa542a8ab","Type":"ContainerDied","Data":"8aa34b5022a7e406b6a5b30a0b11ea2f98e3cb94de1e22aefd8d8c4586971c9e"} Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.301080 4988 scope.go:117] "RemoveContainer" containerID="23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.301104 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9slnf" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.337871 4988 scope.go:117] "RemoveContainer" containerID="65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.355718 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9slnf"] Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.366619 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9slnf"] Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.379259 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4pc2h"] Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.386715 4988 scope.go:117] "RemoveContainer" containerID="161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.392106 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4pc2h"] Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.416124 4988 scope.go:117] "RemoveContainer" containerID="23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902" Nov 23 09:22:44 crc kubenswrapper[4988]: E1123 09:22:44.416803 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902\": container with ID starting with 23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902 not found: ID does not exist" containerID="23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.416861 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902"} err="failed to get container status \"23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902\": rpc error: code = NotFound desc = could not find container \"23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902\": container with ID starting with 23612c3a239df3f8a7d0d17bcd487425feffbef30db0ac550e0c1e18ecad0902 not found: ID does not exist" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.416898 4988 scope.go:117] "RemoveContainer" containerID="65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9" Nov 23 09:22:44 crc kubenswrapper[4988]: E1123 09:22:44.417448 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9\": container with ID starting with 65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9 not found: ID does not exist" containerID="65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.417483 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9"} err="failed to get container status \"65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9\": rpc error: code = NotFound desc = could not find container \"65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9\": container with ID starting with 
65c9f66a7f6528c5ebe7415e444d0ad9ba52fafb14818b38aa4c70ba9f0269c9 not found: ID does not exist" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.417507 4988 scope.go:117] "RemoveContainer" containerID="161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a" Nov 23 09:22:44 crc kubenswrapper[4988]: E1123 09:22:44.417745 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a\": container with ID starting with 161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a not found: ID does not exist" containerID="161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.417774 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a"} err="failed to get container status \"161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a\": rpc error: code = NotFound desc = could not find container \"161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a\": container with ID starting with 161079f9012dd7425da2a4619659079555617a268442cd23207c10249963662a not found: ID does not exist" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.508846 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" path="/var/lib/kubelet/pods/1d71c6dd-d668-4afb-a5ce-791aa542a8ab/volumes" Nov 23 09:22:44 crc kubenswrapper[4988]: I1123 09:22:44.509591 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" path="/var/lib/kubelet/pods/3d56af63-4c27-4111-8b7b-befcf32100b2/volumes" Nov 23 09:22:51 crc kubenswrapper[4988]: I1123 09:22:51.672768 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:22:51 crc kubenswrapper[4988]: I1123 09:22:51.673321 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:23:21 crc kubenswrapper[4988]: I1123 09:23:21.672382 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:23:21 crc kubenswrapper[4988]: I1123 09:23:21.672959 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:23:21 crc kubenswrapper[4988]: I1123 09:23:21.673067 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 09:23:21 crc 
kubenswrapper[4988]: I1123 09:23:21.673904 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:23:21 crc kubenswrapper[4988]: I1123 09:23:21.673963 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" gracePeriod=600 Nov 23 09:23:21 crc kubenswrapper[4988]: E1123 09:23:21.804854 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:23:22 crc kubenswrapper[4988]: I1123 09:23:22.686838 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" exitCode=0 Nov 23 09:23:22 crc kubenswrapper[4988]: I1123 09:23:22.686874 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf"} Nov 23 09:23:22 crc kubenswrapper[4988]: I1123 09:23:22.686950 4988 scope.go:117] "RemoveContainer" containerID="8ca73684f2916f382e3d653e28105ed2e11d7894bedfb368af65f1daa4978404" Nov 23 09:23:22 crc kubenswrapper[4988]: I1123 09:23:22.687929 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:23:22 crc kubenswrapper[4988]: E1123 09:23:22.688477 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:23:35 crc kubenswrapper[4988]: I1123 09:23:35.496323 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:23:35 crc kubenswrapper[4988]: E1123 09:23:35.497101 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
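
From this point to the end of the section the kubelet re-evaluates machine-config-daemon roughly every 10 to 15 seconds, and each sync is skipped with the same CrashLoopBackOff error: "back-off 5m0s" means the restart delay has already reached the kubelet's ceiling, so no new container is started until that window expires. A sketch of the back-off ladder, assuming the kubelet's documented defaults (10s initial delay, doubling after each crash, capped at 5m); this node's actual configuration is not visible in the log.

package main

import (
	"fmt"
	"time"
)

func main() {
	// CrashLoopBackOff ladder under assumed kubelet defaults: 10s initial,
	// doubled after each crash, capped at 5 minutes ("back-off 5m0s" above).
	delay := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %v before starting the container again\n", restart, delay)
		delay *= 2
		if delay > maxBackoff {
			delay = maxBackoff
		}
	}
}
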
containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:23:50 crc kubenswrapper[4988]: E1123 09:23:50.497538 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:24:02 crc kubenswrapper[4988]: I1123 09:24:02.498250 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:24:02 crc kubenswrapper[4988]: E1123 09:24:02.499005 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:24:14 crc kubenswrapper[4988]: I1123 09:24:14.497156 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:24:14 crc kubenswrapper[4988]: E1123 09:24:14.498063 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:24:26 crc kubenswrapper[4988]: I1123 09:24:26.496818 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:24:26 crc kubenswrapper[4988]: E1123 09:24:26.497586 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:24:38 crc kubenswrapper[4988]: I1123 09:24:38.503978 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:24:38 crc kubenswrapper[4988]: E1123 09:24:38.504890 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:24:50 crc kubenswrapper[4988]: I1123 09:24:50.496330 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:24:50 crc kubenswrapper[4988]: E1123 09:24:50.497115 4988 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:25:04 crc kubenswrapper[4988]: I1123 09:25:04.496559 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:25:04 crc kubenswrapper[4988]: E1123 09:25:04.497243 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:25:16 crc kubenswrapper[4988]: I1123 09:25:16.499454 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:25:16 crc kubenswrapper[4988]: E1123 09:25:16.500171 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:25:31 crc kubenswrapper[4988]: I1123 09:25:31.496471 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:25:31 crc kubenswrapper[4988]: E1123 09:25:31.497132 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:25:46 crc kubenswrapper[4988]: I1123 09:25:46.496015 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:25:46 crc kubenswrapper[4988]: E1123 09:25:46.496934 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:25:57 crc kubenswrapper[4988]: I1123 09:25:57.496407 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:25:57 crc kubenswrapper[4988]: E1123 09:25:57.497437 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
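Every retry above is refused with the same "back-off 5m0s" message because the restart back-off for this container has reached its ceiling: the kubelet starts at 10s, doubles the delay after each crash, and caps it at five minutes. A sketch of that progression, assuming the documented 10s base and 5m cap (illustrative only, not the kubelet's implementation):

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the restart delay after a given number of
// consecutive crashes: 10s, 20s, 40s, ... capped at 5m0s.
func crashLoopDelay(restarts int) time.Duration {
	const base = 10 * time.Second
	const maxDelay = 5 * time.Minute
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("crash %d: back-off %v\n", n, crashLoopDelay(n))
	}
}

The cap is also why the daemon is only started again at 09:28:32, roughly five minutes after the 09:23:22 exit, as the later entries show.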
Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.204793 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tppcv"] Nov 23 09:26:10 crc kubenswrapper[4988]: E1123 09:26:10.205895 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerName="extract-utilities" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.205911 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerName="extract-utilities" Nov 23 09:26:10 crc kubenswrapper[4988]: E1123 09:26:10.205936 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerName="extract-content" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.205943 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerName="extract-content" Nov 23 09:26:10 crc kubenswrapper[4988]: E1123 09:26:10.205952 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerName="registry-server" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.205959 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerName="registry-server" Nov 23 09:26:10 crc kubenswrapper[4988]: E1123 09:26:10.206001 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerName="registry-server" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.206009 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerName="registry-server" Nov 23 09:26:10 crc kubenswrapper[4988]: E1123 09:26:10.206042 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerName="extract-utilities" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.206048 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerName="extract-utilities" Nov 23 09:26:10 crc kubenswrapper[4988]: E1123 09:26:10.206077 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerName="extract-content" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.206084 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerName="extract-content" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.206325 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d56af63-4c27-4111-8b7b-befcf32100b2" containerName="registry-server" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.206340 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d71c6dd-d668-4afb-a5ce-791aa542a8ab" containerName="registry-server" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.208116 4988 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.235520 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tppcv"] Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.259664 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9wh\" (UniqueName: \"kubernetes.io/projected/fae3536c-d6a6-44c6-a223-58536293aba4-kube-api-access-qd9wh\") pod \"redhat-marketplace-tppcv\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.259804 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-catalog-content\") pod \"redhat-marketplace-tppcv\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.259875 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-utilities\") pod \"redhat-marketplace-tppcv\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.362094 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9wh\" (UniqueName: \"kubernetes.io/projected/fae3536c-d6a6-44c6-a223-58536293aba4-kube-api-access-qd9wh\") pod \"redhat-marketplace-tppcv\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.362445 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-catalog-content\") pod \"redhat-marketplace-tppcv\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.362600 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-utilities\") pod \"redhat-marketplace-tppcv\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.364115 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-utilities\") pod \"redhat-marketplace-tppcv\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.364939 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-catalog-content\") pod \"redhat-marketplace-tppcv\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.390349 4988 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qd9wh\" (UniqueName: \"kubernetes.io/projected/fae3536c-d6a6-44c6-a223-58536293aba4-kube-api-access-qd9wh\") pod \"redhat-marketplace-tppcv\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:10 crc kubenswrapper[4988]: I1123 09:26:10.539719 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:11 crc kubenswrapper[4988]: I1123 09:26:11.043594 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tppcv"] Nov 23 09:26:11 crc kubenswrapper[4988]: I1123 09:26:11.447662 4988 generic.go:334] "Generic (PLEG): container finished" podID="fae3536c-d6a6-44c6-a223-58536293aba4" containerID="7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f" exitCode=0 Nov 23 09:26:11 crc kubenswrapper[4988]: I1123 09:26:11.447918 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tppcv" event={"ID":"fae3536c-d6a6-44c6-a223-58536293aba4","Type":"ContainerDied","Data":"7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f"} Nov 23 09:26:11 crc kubenswrapper[4988]: I1123 09:26:11.447944 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tppcv" event={"ID":"fae3536c-d6a6-44c6-a223-58536293aba4","Type":"ContainerStarted","Data":"9b46c372659dcd96b8bd92f4dbf8a39473503d93123d739ebb15c6054b35bad1"} Nov 23 09:26:11 crc kubenswrapper[4988]: I1123 09:26:11.449821 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:26:11 crc kubenswrapper[4988]: I1123 09:26:11.495807 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:26:11 crc kubenswrapper[4988]: E1123 09:26:11.496156 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:26:12 crc kubenswrapper[4988]: I1123 09:26:12.459038 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tppcv" event={"ID":"fae3536c-d6a6-44c6-a223-58536293aba4","Type":"ContainerStarted","Data":"37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd"} Nov 23 09:26:13 crc kubenswrapper[4988]: I1123 09:26:13.471040 4988 generic.go:334] "Generic (PLEG): container finished" podID="fae3536c-d6a6-44c6-a223-58536293aba4" containerID="37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd" exitCode=0 Nov 23 09:26:13 crc kubenswrapper[4988]: I1123 09:26:13.471083 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tppcv" event={"ID":"fae3536c-d6a6-44c6-a223-58536293aba4","Type":"ContainerDied","Data":"37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd"} Nov 23 09:26:14 crc kubenswrapper[4988]: I1123 09:26:14.481774 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tppcv" 
event={"ID":"fae3536c-d6a6-44c6-a223-58536293aba4","Type":"ContainerStarted","Data":"4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07"} Nov 23 09:26:14 crc kubenswrapper[4988]: I1123 09:26:14.500270 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tppcv" podStartSLOduration=2.067814607 podStartE2EDuration="4.500250797s" podCreationTimestamp="2025-11-23 09:26:10 +0000 UTC" firstStartedPulling="2025-11-23 09:26:11.449563063 +0000 UTC m=+9623.758075826" lastFinishedPulling="2025-11-23 09:26:13.881999253 +0000 UTC m=+9626.190512016" observedRunningTime="2025-11-23 09:26:14.498416141 +0000 UTC m=+9626.806928904" watchObservedRunningTime="2025-11-23 09:26:14.500250797 +0000 UTC m=+9626.808763570" Nov 23 09:26:20 crc kubenswrapper[4988]: I1123 09:26:20.540631 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:20 crc kubenswrapper[4988]: I1123 09:26:20.541394 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:20 crc kubenswrapper[4988]: I1123 09:26:20.607945 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:21 crc kubenswrapper[4988]: I1123 09:26:21.696695 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:21 crc kubenswrapper[4988]: I1123 09:26:21.781083 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tppcv"] Nov 23 09:26:23 crc kubenswrapper[4988]: I1123 09:26:23.567136 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tppcv" podUID="fae3536c-d6a6-44c6-a223-58536293aba4" containerName="registry-server" containerID="cri-o://4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07" gracePeriod=2 Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.216034 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.396335 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd9wh\" (UniqueName: \"kubernetes.io/projected/fae3536c-d6a6-44c6-a223-58536293aba4-kube-api-access-qd9wh\") pod \"fae3536c-d6a6-44c6-a223-58536293aba4\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.396550 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-catalog-content\") pod \"fae3536c-d6a6-44c6-a223-58536293aba4\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.396609 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-utilities\") pod \"fae3536c-d6a6-44c6-a223-58536293aba4\" (UID: \"fae3536c-d6a6-44c6-a223-58536293aba4\") " Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.397393 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-utilities" (OuterVolumeSpecName: "utilities") pod "fae3536c-d6a6-44c6-a223-58536293aba4" (UID: "fae3536c-d6a6-44c6-a223-58536293aba4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.405280 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae3536c-d6a6-44c6-a223-58536293aba4-kube-api-access-qd9wh" (OuterVolumeSpecName: "kube-api-access-qd9wh") pod "fae3536c-d6a6-44c6-a223-58536293aba4" (UID: "fae3536c-d6a6-44c6-a223-58536293aba4"). InnerVolumeSpecName "kube-api-access-qd9wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.417113 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fae3536c-d6a6-44c6-a223-58536293aba4" (UID: "fae3536c-d6a6-44c6-a223-58536293aba4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.496311 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:26:24 crc kubenswrapper[4988]: E1123 09:26:24.496769 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.499092 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.499119 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd9wh\" (UniqueName: \"kubernetes.io/projected/fae3536c-d6a6-44c6-a223-58536293aba4-kube-api-access-qd9wh\") on node \"crc\" DevicePath \"\"" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.499128 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae3536c-d6a6-44c6-a223-58536293aba4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.577444 4988 generic.go:334] "Generic (PLEG): container finished" podID="fae3536c-d6a6-44c6-a223-58536293aba4" containerID="4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07" exitCode=0 Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.577667 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tppcv" event={"ID":"fae3536c-d6a6-44c6-a223-58536293aba4","Type":"ContainerDied","Data":"4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07"} Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.577769 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tppcv" event={"ID":"fae3536c-d6a6-44c6-a223-58536293aba4","Type":"ContainerDied","Data":"9b46c372659dcd96b8bd92f4dbf8a39473503d93123d739ebb15c6054b35bad1"} Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.577777 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tppcv" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.577814 4988 scope.go:117] "RemoveContainer" containerID="4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.598213 4988 scope.go:117] "RemoveContainer" containerID="37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.600362 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tppcv"] Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.612109 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tppcv"] Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.617557 4988 scope.go:117] "RemoveContainer" containerID="7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.683698 4988 scope.go:117] "RemoveContainer" containerID="4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07" Nov 23 09:26:24 crc kubenswrapper[4988]: E1123 09:26:24.684124 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07\": container with ID starting with 4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07 not found: ID does not exist" containerID="4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.684165 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07"} err="failed to get container status \"4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07\": rpc error: code = NotFound desc = could not find container \"4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07\": container with ID starting with 4936c1db3db2f71faedb513371b7121fc36bbcfcf4f65c4d3d02c1e465b85a07 not found: ID does not exist" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.684216 4988 scope.go:117] "RemoveContainer" containerID="37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd" Nov 23 09:26:24 crc kubenswrapper[4988]: E1123 09:26:24.684552 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd\": container with ID starting with 37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd not found: ID does not exist" containerID="37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.684590 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd"} err="failed to get container status \"37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd\": rpc error: code = NotFound desc = could not find container \"37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd\": container with ID starting with 37c91955c217336c34a29e28387b40d08a6a32cc601791f7a3b2d0434da7c9bd not found: ID does not exist"
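The ContainerStatus/DeleteContainer failures above are a benign race rather than a real fault: the containers were already removed, so CRI-O answers the follow-up calls with gRPC code NotFound and the kubelet simply logs the outcome and moves on. Cleanup paths that can run twice are normally written to treat NotFound as success. A sketch under that assumption (removeContainer is a hypothetical stand-in for the CRI RemoveContainer call, and its error text mirrors the log):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer simulates a CRI RemoveContainer call against a
// container that no longer exists (hypothetical helper).
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: container with ID starting with %s not found: ID does not exist", id, id)
}

// cleanup is idempotent: a container that is already gone is treated as
// successfully removed instead of surfacing an error.
func cleanup(id string) error {
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already removed: the desired end state
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(cleanup("7150a113efdec11c")) // prints <nil>
}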
Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.684609 4988 scope.go:117] "RemoveContainer" containerID="7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f" Nov 23 09:26:24 crc kubenswrapper[4988]: E1123 09:26:24.684955 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f\": container with ID starting with 7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f not found: ID does not exist" containerID="7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f" Nov 23 09:26:24 crc kubenswrapper[4988]: I1123 09:26:24.685082 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f"} err="failed to get container status \"7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f\": rpc error: code = NotFound desc = could not find container \"7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f\": container with ID starting with 7150a113efdec11c8ea0b6826f6626d66d1252f64dd28418e2d33b423b789e3f not found: ID does not exist" Nov 23 09:26:26 crc kubenswrapper[4988]: I1123 09:26:26.508649 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae3536c-d6a6-44c6-a223-58536293aba4" path="/var/lib/kubelet/pods/fae3536c-d6a6-44c6-a223-58536293aba4/volumes" Nov 23 09:26:39 crc kubenswrapper[4988]: I1123 09:26:39.496236 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:26:39 crc kubenswrapper[4988]: E1123 09:26:39.497035 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:26:50 crc kubenswrapper[4988]: I1123 09:26:50.496881 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:26:50 crc kubenswrapper[4988]: E1123 09:26:50.497731 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:27:02 crc kubenswrapper[4988]: I1123 09:27:02.496050 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:27:02 crc kubenswrapper[4988]: E1123 09:27:02.498084 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:27:13 crc kubenswrapper[4988]: I1123 09:27:13.498549 4988 scope.go:117] "RemoveContainer"
containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:27:13 crc kubenswrapper[4988]: E1123 09:27:13.500218 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:27:27 crc kubenswrapper[4988]: I1123 09:27:27.496737 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:27:27 crc kubenswrapper[4988]: E1123 09:27:27.497738 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:27:38 crc kubenswrapper[4988]: I1123 09:27:38.507568 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:27:38 crc kubenswrapper[4988]: E1123 09:27:38.508085 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:27:53 crc kubenswrapper[4988]: I1123 09:27:53.496539 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:27:53 crc kubenswrapper[4988]: E1123 09:27:53.497304 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:28:04 crc kubenswrapper[4988]: I1123 09:28:04.497528 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:28:04 crc kubenswrapper[4988]: E1123 09:28:04.498262 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:28:19 crc kubenswrapper[4988]: I1123 09:28:19.496816 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:28:19 crc kubenswrapper[4988]: E1123 09:28:19.500372 4988 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:28:23 crc kubenswrapper[4988]: I1123 09:28:23.425176 4988 scope.go:117] "RemoveContainer" containerID="9b78bd0a4cda2107a371c6f2c81eb43e9fa1b4f74083b40def06548e157c37b7" Nov 23 09:28:23 crc kubenswrapper[4988]: I1123 09:28:23.462545 4988 scope.go:117] "RemoveContainer" containerID="c625307de3e187be854d7c60813ea28fb48cd9f51fb579daff2cbcbf29b4c409" Nov 23 09:28:32 crc kubenswrapper[4988]: I1123 09:28:32.496952 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:28:32 crc kubenswrapper[4988]: I1123 09:28:32.850855 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"5281cb4afb6a0bf021c95e9ac802a383d473c5aa6df21196380a7b7479a165b9"} Nov 23 09:29:23 crc kubenswrapper[4988]: I1123 09:29:23.537375 4988 scope.go:117] "RemoveContainer" containerID="1389e89e4d820c1cf9c3239cdeead62148efdb733225f5af094791c350e7e101" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.186641 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw"] Nov 23 09:30:00 crc kubenswrapper[4988]: E1123 09:30:00.187954 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae3536c-d6a6-44c6-a223-58536293aba4" containerName="registry-server" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.187976 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae3536c-d6a6-44c6-a223-58536293aba4" containerName="registry-server" Nov 23 09:30:00 crc kubenswrapper[4988]: E1123 09:30:00.188036 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae3536c-d6a6-44c6-a223-58536293aba4" containerName="extract-content" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.188047 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae3536c-d6a6-44c6-a223-58536293aba4" containerName="extract-content" Nov 23 09:30:00 crc kubenswrapper[4988]: E1123 09:30:00.188088 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae3536c-d6a6-44c6-a223-58536293aba4" containerName="extract-utilities" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.188101 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae3536c-d6a6-44c6-a223-58536293aba4" containerName="extract-utilities" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.188420 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae3536c-d6a6-44c6-a223-58536293aba4" containerName="registry-server" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.189616 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.191800 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.192418 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.201645 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw"] Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.213551 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-config-volume\") pod \"collect-profiles-29398170-nqrlw\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.213610 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-secret-volume\") pod \"collect-profiles-29398170-nqrlw\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.213743 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx6fm\" (UniqueName: \"kubernetes.io/projected/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-kube-api-access-mx6fm\") pod \"collect-profiles-29398170-nqrlw\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.315547 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx6fm\" (UniqueName: \"kubernetes.io/projected/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-kube-api-access-mx6fm\") pod \"collect-profiles-29398170-nqrlw\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.315863 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-config-volume\") pod \"collect-profiles-29398170-nqrlw\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.315958 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-secret-volume\") pod \"collect-profiles-29398170-nqrlw\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.316730 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-config-volume\") pod 
\"collect-profiles-29398170-nqrlw\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.321443 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-secret-volume\") pod \"collect-profiles-29398170-nqrlw\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.330063 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx6fm\" (UniqueName: \"kubernetes.io/projected/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-kube-api-access-mx6fm\") pod \"collect-profiles-29398170-nqrlw\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.523887 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:00 crc kubenswrapper[4988]: I1123 09:30:00.979210 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw"] Nov 23 09:30:00 crc kubenswrapper[4988]: W1123 09:30:00.989621 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bfc9378_bd01_418c_a72e_86e4b4a45fb6.slice/crio-774930e982a79ef4acce8499dc948bf0a0e488f72d685e93c967ae25fcd17083 WatchSource:0}: Error finding container 774930e982a79ef4acce8499dc948bf0a0e488f72d685e93c967ae25fcd17083: Status 404 returned error can't find the container with id 774930e982a79ef4acce8499dc948bf0a0e488f72d685e93c967ae25fcd17083 Nov 23 09:30:01 crc kubenswrapper[4988]: I1123 09:30:01.823968 4988 generic.go:334] "Generic (PLEG): container finished" podID="1bfc9378-bd01-418c-a72e-86e4b4a45fb6" containerID="3bec9bd996ee900b4de4e921e44ef9ba4fade8962f6ea58bc607edc36c195e01" exitCode=0 Nov 23 09:30:01 crc kubenswrapper[4988]: I1123 09:30:01.824083 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" event={"ID":"1bfc9378-bd01-418c-a72e-86e4b4a45fb6","Type":"ContainerDied","Data":"3bec9bd996ee900b4de4e921e44ef9ba4fade8962f6ea58bc607edc36c195e01"} Nov 23 09:30:01 crc kubenswrapper[4988]: I1123 09:30:01.824348 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" event={"ID":"1bfc9378-bd01-418c-a72e-86e4b4a45fb6","Type":"ContainerStarted","Data":"774930e982a79ef4acce8499dc948bf0a0e488f72d685e93c967ae25fcd17083"} Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.249212 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.388731 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx6fm\" (UniqueName: \"kubernetes.io/projected/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-kube-api-access-mx6fm\") pod \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.388806 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-secret-volume\") pod \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.388847 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-config-volume\") pod \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\" (UID: \"1bfc9378-bd01-418c-a72e-86e4b4a45fb6\") " Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.389725 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-config-volume" (OuterVolumeSpecName: "config-volume") pod "1bfc9378-bd01-418c-a72e-86e4b4a45fb6" (UID: "1bfc9378-bd01-418c-a72e-86e4b4a45fb6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.398460 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-kube-api-access-mx6fm" (OuterVolumeSpecName: "kube-api-access-mx6fm") pod "1bfc9378-bd01-418c-a72e-86e4b4a45fb6" (UID: "1bfc9378-bd01-418c-a72e-86e4b4a45fb6"). InnerVolumeSpecName "kube-api-access-mx6fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.404886 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1bfc9378-bd01-418c-a72e-86e4b4a45fb6" (UID: "1bfc9378-bd01-418c-a72e-86e4b4a45fb6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.491746 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx6fm\" (UniqueName: \"kubernetes.io/projected/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-kube-api-access-mx6fm\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.491801 4988 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.491814 4988 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bfc9378-bd01-418c-a72e-86e4b4a45fb6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.843481 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" event={"ID":"1bfc9378-bd01-418c-a72e-86e4b4a45fb6","Type":"ContainerDied","Data":"774930e982a79ef4acce8499dc948bf0a0e488f72d685e93c967ae25fcd17083"} Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.843537 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="774930e982a79ef4acce8499dc948bf0a0e488f72d685e93c967ae25fcd17083" Nov 23 09:30:03 crc kubenswrapper[4988]: I1123 09:30:03.843597 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-nqrlw" Nov 23 09:30:04 crc kubenswrapper[4988]: I1123 09:30:04.334421 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx"] Nov 23 09:30:04 crc kubenswrapper[4988]: I1123 09:30:04.344631 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-244cx"] Nov 23 09:30:04 crc kubenswrapper[4988]: I1123 09:30:04.511232 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22ae8f69-6dc3-46d3-8a02-b04053381a7d" path="/var/lib/kubelet/pods/22ae8f69-6dc3-46d3-8a02-b04053381a7d/volumes" Nov 23 09:30:15 crc kubenswrapper[4988]: I1123 09:30:15.966214 4988 generic.go:334] "Generic (PLEG): container finished" podID="11f8f692-04c1-427a-b77f-686e3f8409ed" containerID="f63010fa5ed55fa91abccb8f839409849a9f53036c1d25a6a1499701060d3928" exitCode=0 Nov 23 09:30:15 crc kubenswrapper[4988]: I1123 09:30:15.966362 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"11f8f692-04c1-427a-b77f-686e3f8409ed","Type":"ContainerDied","Data":"f63010fa5ed55fa91abccb8f839409849a9f53036c1d25a6a1499701060d3928"} Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.418096 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.506504 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ca-certs\") pod \"11f8f692-04c1-427a-b77f-686e3f8409ed\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.506582 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-workdir\") pod \"11f8f692-04c1-427a-b77f-686e3f8409ed\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.506634 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-config-data\") pod \"11f8f692-04c1-427a-b77f-686e3f8409ed\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.506661 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ssh-key\") pod \"11f8f692-04c1-427a-b77f-686e3f8409ed\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.506700 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"11f8f692-04c1-427a-b77f-686e3f8409ed\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.506735 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-temporary\") pod \"11f8f692-04c1-427a-b77f-686e3f8409ed\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.506777 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config\") pod \"11f8f692-04c1-427a-b77f-686e3f8409ed\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.506804 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config-secret\") pod \"11f8f692-04c1-427a-b77f-686e3f8409ed\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.506859 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrv99\" (UniqueName: \"kubernetes.io/projected/11f8f692-04c1-427a-b77f-686e3f8409ed-kube-api-access-vrv99\") pod \"11f8f692-04c1-427a-b77f-686e3f8409ed\" (UID: \"11f8f692-04c1-427a-b77f-686e3f8409ed\") " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.507879 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "11f8f692-04c1-427a-b77f-686e3f8409ed" (UID: "11f8f692-04c1-427a-b77f-686e3f8409ed"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.509779 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-config-data" (OuterVolumeSpecName: "config-data") pod "11f8f692-04c1-427a-b77f-686e3f8409ed" (UID: "11f8f692-04c1-427a-b77f-686e3f8409ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.514666 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "11f8f692-04c1-427a-b77f-686e3f8409ed" (UID: "11f8f692-04c1-427a-b77f-686e3f8409ed"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.514872 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "11f8f692-04c1-427a-b77f-686e3f8409ed" (UID: "11f8f692-04c1-427a-b77f-686e3f8409ed"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.516561 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f8f692-04c1-427a-b77f-686e3f8409ed-kube-api-access-vrv99" (OuterVolumeSpecName: "kube-api-access-vrv99") pod "11f8f692-04c1-427a-b77f-686e3f8409ed" (UID: "11f8f692-04c1-427a-b77f-686e3f8409ed"). InnerVolumeSpecName "kube-api-access-vrv99". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.542678 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "11f8f692-04c1-427a-b77f-686e3f8409ed" (UID: "11f8f692-04c1-427a-b77f-686e3f8409ed"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.543589 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "11f8f692-04c1-427a-b77f-686e3f8409ed" (UID: "11f8f692-04c1-427a-b77f-686e3f8409ed"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.549503 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "11f8f692-04c1-427a-b77f-686e3f8409ed" (UID: "11f8f692-04c1-427a-b77f-686e3f8409ed"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.600549 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "11f8f692-04c1-427a-b77f-686e3f8409ed" (UID: "11f8f692-04c1-427a-b77f-686e3f8409ed"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.609742 4988 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.609790 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.609805 4988 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.609814 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrv99\" (UniqueName: \"kubernetes.io/projected/11f8f692-04c1-427a-b77f-686e3f8409ed-kube-api-access-vrv99\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.609823 4988 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.609831 4988 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/11f8f692-04c1-427a-b77f-686e3f8409ed-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.609839 4988 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/11f8f692-04c1-427a-b77f-686e3f8409ed-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.609847 4988 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f8f692-04c1-427a-b77f-686e3f8409ed-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.609885 4988 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.633752 4988 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.711951 4988 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.990517 4988 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"11f8f692-04c1-427a-b77f-686e3f8409ed","Type":"ContainerDied","Data":"90e6032fb12ff90c21aebb4c6acdbc7d4b034023bee0ecb29434c54f2518a634"} Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.990560 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e6032fb12ff90c21aebb4c6acdbc7d4b034023bee0ecb29434c54f2518a634" Nov 23 09:30:17 crc kubenswrapper[4988]: I1123 09:30:17.990636 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 09:30:23 crc kubenswrapper[4988]: I1123 09:30:23.621520 4988 scope.go:117] "RemoveContainer" containerID="82ff6a8af624a8dc96afa67105deda9ca7bea70b7be9c8d71bc97ad1712edee5" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.435351 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 09:30:28 crc kubenswrapper[4988]: E1123 09:30:28.436429 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f8f692-04c1-427a-b77f-686e3f8409ed" containerName="tempest-tests-tempest-tests-runner" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.436450 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f8f692-04c1-427a-b77f-686e3f8409ed" containerName="tempest-tests-tempest-tests-runner" Nov 23 09:30:28 crc kubenswrapper[4988]: E1123 09:30:28.436460 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfc9378-bd01-418c-a72e-86e4b4a45fb6" containerName="collect-profiles" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.436468 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfc9378-bd01-418c-a72e-86e4b4a45fb6" containerName="collect-profiles" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.436698 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bfc9378-bd01-418c-a72e-86e4b4a45fb6" containerName="collect-profiles" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.436770 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f8f692-04c1-427a-b77f-686e3f8409ed" containerName="tempest-tests-tempest-tests-runner" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.437663 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.440235 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-q8l7k" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.450628 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.558501 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55sgg\" (UniqueName: \"kubernetes.io/projected/416d4162-3900-4408-8882-0944facebc82-kube-api-access-55sgg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"416d4162-3900-4408-8882-0944facebc82\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.558632 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"416d4162-3900-4408-8882-0944facebc82\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.660073 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55sgg\" (UniqueName: \"kubernetes.io/projected/416d4162-3900-4408-8882-0944facebc82-kube-api-access-55sgg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"416d4162-3900-4408-8882-0944facebc82\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.660253 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"416d4162-3900-4408-8882-0944facebc82\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.660774 4988 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"416d4162-3900-4408-8882-0944facebc82\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.688294 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55sgg\" (UniqueName: \"kubernetes.io/projected/416d4162-3900-4408-8882-0944facebc82-kube-api-access-55sgg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"416d4162-3900-4408-8882-0944facebc82\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 09:30:28 crc kubenswrapper[4988]: I1123 09:30:28.725412 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"416d4162-3900-4408-8882-0944facebc82\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 09:30:28 crc 
kubenswrapper[4988]: I1123 09:30:28.769700 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 09:30:29 crc kubenswrapper[4988]: I1123 09:30:29.228382 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 09:30:29 crc kubenswrapper[4988]: W1123 09:30:29.236734 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod416d4162_3900_4408_8882_0944facebc82.slice/crio-c07b100db85633309cfbe9c2fed6732809f808efb47e06e584bb67420913cf8c WatchSource:0}: Error finding container c07b100db85633309cfbe9c2fed6732809f808efb47e06e584bb67420913cf8c: Status 404 returned error can't find the container with id c07b100db85633309cfbe9c2fed6732809f808efb47e06e584bb67420913cf8c Nov 23 09:30:30 crc kubenswrapper[4988]: I1123 09:30:30.130833 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"416d4162-3900-4408-8882-0944facebc82","Type":"ContainerStarted","Data":"c07b100db85633309cfbe9c2fed6732809f808efb47e06e584bb67420913cf8c"} Nov 23 09:30:31 crc kubenswrapper[4988]: I1123 09:30:31.146082 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"416d4162-3900-4408-8882-0944facebc82","Type":"ContainerStarted","Data":"6dcc8bfc9e70371262e546ece20b2da4539a8404c00cd9df7c85627f9fc0c406"} Nov 23 09:30:31 crc kubenswrapper[4988]: I1123 09:30:31.172678 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.908987709 podStartE2EDuration="3.17265047s" podCreationTimestamp="2025-11-23 09:30:28 +0000 UTC" firstStartedPulling="2025-11-23 09:30:29.239057662 +0000 UTC m=+9881.547570435" lastFinishedPulling="2025-11-23 09:30:30.502720423 +0000 UTC m=+9882.811233196" observedRunningTime="2025-11-23 09:30:31.164885328 +0000 UTC m=+9883.473398101" watchObservedRunningTime="2025-11-23 09:30:31.17265047 +0000 UTC m=+9883.481163233" Nov 23 09:30:51 crc kubenswrapper[4988]: I1123 09:30:51.672535 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:30:51 crc kubenswrapper[4988]: I1123 09:30:51.673266 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:31:21 crc kubenswrapper[4988]: I1123 09:31:21.672928 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:31:21 crc kubenswrapper[4988]: I1123 09:31:21.673477 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" 
podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.498479 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rbvk2/must-gather-t22m6"] Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.500594 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.503098 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rbvk2"/"kube-root-ca.crt" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.503145 4988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rbvk2"/"openshift-service-ca.crt" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.503160 4988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rbvk2"/"default-dockercfg-v8x6f" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.509123 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rbvk2/must-gather-t22m6"] Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.563322 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0369f4b3-b212-4bbf-b150-c4264029e027-must-gather-output\") pod \"must-gather-t22m6\" (UID: \"0369f4b3-b212-4bbf-b150-c4264029e027\") " pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.563385 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psrb8\" (UniqueName: \"kubernetes.io/projected/0369f4b3-b212-4bbf-b150-c4264029e027-kube-api-access-psrb8\") pod \"must-gather-t22m6\" (UID: \"0369f4b3-b212-4bbf-b150-c4264029e027\") " pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.664992 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0369f4b3-b212-4bbf-b150-c4264029e027-must-gather-output\") pod \"must-gather-t22m6\" (UID: \"0369f4b3-b212-4bbf-b150-c4264029e027\") " pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.665047 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psrb8\" (UniqueName: \"kubernetes.io/projected/0369f4b3-b212-4bbf-b150-c4264029e027-kube-api-access-psrb8\") pod \"must-gather-t22m6\" (UID: \"0369f4b3-b212-4bbf-b150-c4264029e027\") " pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.665798 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0369f4b3-b212-4bbf-b150-c4264029e027-must-gather-output\") pod \"must-gather-t22m6\" (UID: \"0369f4b3-b212-4bbf-b150-c4264029e027\") " pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.687936 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psrb8\" (UniqueName: 
\"kubernetes.io/projected/0369f4b3-b212-4bbf-b150-c4264029e027-kube-api-access-psrb8\") pod \"must-gather-t22m6\" (UID: \"0369f4b3-b212-4bbf-b150-c4264029e027\") " pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:31:27 crc kubenswrapper[4988]: I1123 09:31:27.818085 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:31:28 crc kubenswrapper[4988]: I1123 09:31:28.320579 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:31:28 crc kubenswrapper[4988]: I1123 09:31:28.326797 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rbvk2/must-gather-t22m6"] Nov 23 09:31:28 crc kubenswrapper[4988]: I1123 09:31:28.796088 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/must-gather-t22m6" event={"ID":"0369f4b3-b212-4bbf-b150-c4264029e027","Type":"ContainerStarted","Data":"601e55d722d070bde7e21dda34dbc58ca0867f9fdd14719ec442821ee0130990"} Nov 23 09:31:34 crc kubenswrapper[4988]: I1123 09:31:34.856153 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/must-gather-t22m6" event={"ID":"0369f4b3-b212-4bbf-b150-c4264029e027","Type":"ContainerStarted","Data":"a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901"} Nov 23 09:31:35 crc kubenswrapper[4988]: I1123 09:31:35.872634 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/must-gather-t22m6" event={"ID":"0369f4b3-b212-4bbf-b150-c4264029e027","Type":"ContainerStarted","Data":"47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd"} Nov 23 09:31:35 crc kubenswrapper[4988]: I1123 09:31:35.902824 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rbvk2/must-gather-t22m6" podStartSLOduration=2.7475415180000002 podStartE2EDuration="8.902803218s" podCreationTimestamp="2025-11-23 09:31:27 +0000 UTC" firstStartedPulling="2025-11-23 09:31:28.320296284 +0000 UTC m=+9940.628809057" lastFinishedPulling="2025-11-23 09:31:34.475557954 +0000 UTC m=+9946.784070757" observedRunningTime="2025-11-23 09:31:35.893877657 +0000 UTC m=+9948.202390420" watchObservedRunningTime="2025-11-23 09:31:35.902803218 +0000 UTC m=+9948.211315981" Nov 23 09:31:37 crc kubenswrapper[4988]: E1123 09:31:37.859794 4988 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.176:53894->38.102.83.176:33549: write tcp 38.102.83.176:53894->38.102.83.176:33549: write: broken pipe Nov 23 09:31:38 crc kubenswrapper[4988]: I1123 09:31:38.650730 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rbvk2/crc-debug-8fms8"] Nov 23 09:31:38 crc kubenswrapper[4988]: I1123 09:31:38.652239 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:31:38 crc kubenswrapper[4988]: I1123 09:31:38.737879 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbn5c\" (UniqueName: \"kubernetes.io/projected/f0e5525e-4d56-4c49-b48a-d27edefc7d79-kube-api-access-fbn5c\") pod \"crc-debug-8fms8\" (UID: \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\") " pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:31:38 crc kubenswrapper[4988]: I1123 09:31:38.738226 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5525e-4d56-4c49-b48a-d27edefc7d79-host\") pod \"crc-debug-8fms8\" (UID: \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\") " pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:31:38 crc kubenswrapper[4988]: I1123 09:31:38.839637 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbn5c\" (UniqueName: \"kubernetes.io/projected/f0e5525e-4d56-4c49-b48a-d27edefc7d79-kube-api-access-fbn5c\") pod \"crc-debug-8fms8\" (UID: \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\") " pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:31:38 crc kubenswrapper[4988]: I1123 09:31:38.839680 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5525e-4d56-4c49-b48a-d27edefc7d79-host\") pod \"crc-debug-8fms8\" (UID: \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\") " pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:31:38 crc kubenswrapper[4988]: I1123 09:31:38.839917 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5525e-4d56-4c49-b48a-d27edefc7d79-host\") pod \"crc-debug-8fms8\" (UID: \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\") " pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:31:38 crc kubenswrapper[4988]: I1123 09:31:38.866262 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbn5c\" (UniqueName: \"kubernetes.io/projected/f0e5525e-4d56-4c49-b48a-d27edefc7d79-kube-api-access-fbn5c\") pod \"crc-debug-8fms8\" (UID: \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\") " pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:31:38 crc kubenswrapper[4988]: I1123 09:31:38.974768 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:31:39 crc kubenswrapper[4988]: I1123 09:31:39.911445 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/crc-debug-8fms8" event={"ID":"f0e5525e-4d56-4c49-b48a-d27edefc7d79","Type":"ContainerStarted","Data":"0a219def7aefd16190f6a7625609a1e9f447df05f555c1cbb86e9e6162513f2c"} Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.465835 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-njp97"] Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.480181 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.487843 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njp97"] Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.536914 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-utilities\") pod \"redhat-operators-njp97\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.537022 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-catalog-content\") pod \"redhat-operators-njp97\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.537062 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nrz\" (UniqueName: \"kubernetes.io/projected/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-kube-api-access-95nrz\") pod \"redhat-operators-njp97\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.639407 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-utilities\") pod \"redhat-operators-njp97\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.639538 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-catalog-content\") pod \"redhat-operators-njp97\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.639580 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95nrz\" (UniqueName: \"kubernetes.io/projected/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-kube-api-access-95nrz\") pod \"redhat-operators-njp97\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.640009 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-utilities\") pod \"redhat-operators-njp97\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.640274 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-catalog-content\") pod \"redhat-operators-njp97\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.660846 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-95nrz\" (UniqueName: \"kubernetes.io/projected/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-kube-api-access-95nrz\") pod \"redhat-operators-njp97\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:43 crc kubenswrapper[4988]: I1123 09:31:43.816718 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:31:44 crc kubenswrapper[4988]: I1123 09:31:44.311705 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-njp97"] Nov 23 09:31:51 crc kubenswrapper[4988]: I1123 09:31:51.672435 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:31:51 crc kubenswrapper[4988]: I1123 09:31:51.673013 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:31:51 crc kubenswrapper[4988]: I1123 09:31:51.673059 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 09:31:51 crc kubenswrapper[4988]: I1123 09:31:51.673813 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5281cb4afb6a0bf021c95e9ac802a383d473c5aa6df21196380a7b7479a165b9"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:31:51 crc kubenswrapper[4988]: I1123 09:31:51.673858 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://5281cb4afb6a0bf021c95e9ac802a383d473c5aa6df21196380a7b7479a165b9" gracePeriod=600 Nov 23 09:31:52 crc kubenswrapper[4988]: I1123 09:31:52.046369 4988 generic.go:334] "Generic (PLEG): container finished" podID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerID="91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac" exitCode=0 Nov 23 09:31:52 crc kubenswrapper[4988]: I1123 09:31:52.046477 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njp97" event={"ID":"f2b6e905-5b50-4f64-aa10-43d7eb15df7e","Type":"ContainerDied","Data":"91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac"} Nov 23 09:31:52 crc kubenswrapper[4988]: I1123 09:31:52.046966 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njp97" event={"ID":"f2b6e905-5b50-4f64-aa10-43d7eb15df7e","Type":"ContainerStarted","Data":"b36b92226aa8d305e43033dfa580f936e62ca04dd1b5912081311f5cf5188f00"} Nov 23 09:31:52 crc kubenswrapper[4988]: I1123 09:31:52.052642 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="5281cb4afb6a0bf021c95e9ac802a383d473c5aa6df21196380a7b7479a165b9" exitCode=0 Nov 
23 09:31:52 crc kubenswrapper[4988]: I1123 09:31:52.052676 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"5281cb4afb6a0bf021c95e9ac802a383d473c5aa6df21196380a7b7479a165b9"} Nov 23 09:31:52 crc kubenswrapper[4988]: I1123 09:31:52.052738 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c"} Nov 23 09:31:52 crc kubenswrapper[4988]: I1123 09:31:52.052761 4988 scope.go:117] "RemoveContainer" containerID="ef0736d2c389d089a25deb6f7a0950dfece36e2e762a2465dcb361a6e0601edf" Nov 23 09:31:52 crc kubenswrapper[4988]: I1123 09:31:52.063767 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/crc-debug-8fms8" event={"ID":"f0e5525e-4d56-4c49-b48a-d27edefc7d79","Type":"ContainerStarted","Data":"411603e1e98015d563049785e03ec6b58aa2d95c1f688013e3224e6983628b50"} Nov 23 09:31:52 crc kubenswrapper[4988]: I1123 09:31:52.089579 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rbvk2/crc-debug-8fms8" podStartSLOduration=1.7450438689999999 podStartE2EDuration="14.089560766s" podCreationTimestamp="2025-11-23 09:31:38 +0000 UTC" firstStartedPulling="2025-11-23 09:31:39.014477733 +0000 UTC m=+9951.322990496" lastFinishedPulling="2025-11-23 09:31:51.35899463 +0000 UTC m=+9963.667507393" observedRunningTime="2025-11-23 09:31:52.086951002 +0000 UTC m=+9964.395463775" watchObservedRunningTime="2025-11-23 09:31:52.089560766 +0000 UTC m=+9964.398073529" Nov 23 09:31:53 crc kubenswrapper[4988]: I1123 09:31:53.076280 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njp97" event={"ID":"f2b6e905-5b50-4f64-aa10-43d7eb15df7e","Type":"ContainerStarted","Data":"ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6"} Nov 23 09:31:58 crc kubenswrapper[4988]: I1123 09:31:58.140369 4988 generic.go:334] "Generic (PLEG): container finished" podID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerID="ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6" exitCode=0 Nov 23 09:31:58 crc kubenswrapper[4988]: I1123 09:31:58.140432 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njp97" event={"ID":"f2b6e905-5b50-4f64-aa10-43d7eb15df7e","Type":"ContainerDied","Data":"ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6"} Nov 23 09:32:06 crc kubenswrapper[4988]: I1123 09:32:06.217823 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njp97" event={"ID":"f2b6e905-5b50-4f64-aa10-43d7eb15df7e","Type":"ContainerStarted","Data":"96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af"} Nov 23 09:32:06 crc kubenswrapper[4988]: I1123 09:32:06.234532 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-njp97" podStartSLOduration=9.899265588 podStartE2EDuration="23.23451596s" podCreationTimestamp="2025-11-23 09:31:43 +0000 UTC" firstStartedPulling="2025-11-23 09:31:52.048468161 +0000 UTC m=+9964.356980924" lastFinishedPulling="2025-11-23 09:32:05.383718523 +0000 UTC m=+9977.692231296" observedRunningTime="2025-11-23 09:32:06.233402313 +0000 UTC 
m=+9978.541915086" watchObservedRunningTime="2025-11-23 09:32:06.23451596 +0000 UTC m=+9978.543028723" Nov 23 09:32:13 crc kubenswrapper[4988]: I1123 09:32:13.828698 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:32:13 crc kubenswrapper[4988]: I1123 09:32:13.829332 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:32:14 crc kubenswrapper[4988]: I1123 09:32:14.880269 4988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-njp97" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerName="registry-server" probeResult="failure" output=< Nov 23 09:32:14 crc kubenswrapper[4988]: timeout: failed to connect service ":50051" within 1s Nov 23 09:32:14 crc kubenswrapper[4988]: > Nov 23 09:32:23 crc kubenswrapper[4988]: I1123 09:32:23.879241 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:32:23 crc kubenswrapper[4988]: I1123 09:32:23.928007 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:32:24 crc kubenswrapper[4988]: I1123 09:32:24.115371 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njp97"] Nov 23 09:32:25 crc kubenswrapper[4988]: I1123 09:32:25.408004 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-njp97" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerName="registry-server" containerID="cri-o://96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af" gracePeriod=2 Nov 23 09:32:25 crc kubenswrapper[4988]: I1123 09:32:25.922106 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:32:25 crc kubenswrapper[4988]: I1123 09:32:25.980305 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-utilities\") pod \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " Nov 23 09:32:25 crc kubenswrapper[4988]: I1123 09:32:25.980470 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-catalog-content\") pod \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " Nov 23 09:32:25 crc kubenswrapper[4988]: I1123 09:32:25.980617 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95nrz\" (UniqueName: \"kubernetes.io/projected/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-kube-api-access-95nrz\") pod \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\" (UID: \"f2b6e905-5b50-4f64-aa10-43d7eb15df7e\") " Nov 23 09:32:25 crc kubenswrapper[4988]: I1123 09:32:25.981718 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-utilities" (OuterVolumeSpecName: "utilities") pod "f2b6e905-5b50-4f64-aa10-43d7eb15df7e" (UID: "f2b6e905-5b50-4f64-aa10-43d7eb15df7e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:32:25 crc kubenswrapper[4988]: I1123 09:32:25.995527 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-kube-api-access-95nrz" (OuterVolumeSpecName: "kube-api-access-95nrz") pod "f2b6e905-5b50-4f64-aa10-43d7eb15df7e" (UID: "f2b6e905-5b50-4f64-aa10-43d7eb15df7e"). InnerVolumeSpecName "kube-api-access-95nrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.081804 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2b6e905-5b50-4f64-aa10-43d7eb15df7e" (UID: "f2b6e905-5b50-4f64-aa10-43d7eb15df7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.083101 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.083129 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95nrz\" (UniqueName: \"kubernetes.io/projected/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-kube-api-access-95nrz\") on node \"crc\" DevicePath \"\"" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.083142 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2b6e905-5b50-4f64-aa10-43d7eb15df7e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.425259 4988 generic.go:334] "Generic (PLEG): container finished" podID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerID="96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af" exitCode=0 Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.425357 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njp97" event={"ID":"f2b6e905-5b50-4f64-aa10-43d7eb15df7e","Type":"ContainerDied","Data":"96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af"} Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.426415 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-njp97" event={"ID":"f2b6e905-5b50-4f64-aa10-43d7eb15df7e","Type":"ContainerDied","Data":"b36b92226aa8d305e43033dfa580f936e62ca04dd1b5912081311f5cf5188f00"} Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.425417 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-njp97" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.426452 4988 scope.go:117] "RemoveContainer" containerID="96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.482289 4988 scope.go:117] "RemoveContainer" containerID="ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.486954 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-njp97"] Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.512436 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-njp97"] Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.514295 4988 scope.go:117] "RemoveContainer" containerID="91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.566834 4988 scope.go:117] "RemoveContainer" containerID="96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af" Nov 23 09:32:26 crc kubenswrapper[4988]: E1123 09:32:26.567398 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af\": container with ID starting with 96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af not found: ID does not exist" containerID="96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.567437 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af"} err="failed to get container status \"96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af\": rpc error: code = NotFound desc = could not find container \"96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af\": container with ID starting with 96bd74bf7ae91183221dc8685c0b4659a57830dfee3849b0b4a59686b5c826af not found: ID does not exist" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.567457 4988 scope.go:117] "RemoveContainer" containerID="ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6" Nov 23 09:32:26 crc kubenswrapper[4988]: E1123 09:32:26.567703 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6\": container with ID starting with ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6 not found: ID does not exist" containerID="ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.567780 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6"} err="failed to get container status \"ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6\": rpc error: code = NotFound desc = could not find container \"ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6\": container with ID starting with ec760ebfd4c76bd5dd6f95fef708f41b674fca524b36eec39b20fa4bcd1abbd6 not found: ID does not exist" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.567852 4988 scope.go:117] "RemoveContainer" 
containerID="91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac" Nov 23 09:32:26 crc kubenswrapper[4988]: E1123 09:32:26.568237 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac\": container with ID starting with 91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac not found: ID does not exist" containerID="91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac" Nov 23 09:32:26 crc kubenswrapper[4988]: I1123 09:32:26.568256 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac"} err="failed to get container status \"91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac\": rpc error: code = NotFound desc = could not find container \"91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac\": container with ID starting with 91aacc2d3eab0aeb05f1bb593ed9dcbd71ece572051e0a35a6d406914f6d39ac not found: ID does not exist" Nov 23 09:32:28 crc kubenswrapper[4988]: I1123 09:32:28.518899 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" path="/var/lib/kubelet/pods/f2b6e905-5b50-4f64-aa10-43d7eb15df7e/volumes" Nov 23 09:32:46 crc kubenswrapper[4988]: I1123 09:32:46.662560 4988 generic.go:334] "Generic (PLEG): container finished" podID="f0e5525e-4d56-4c49-b48a-d27edefc7d79" containerID="411603e1e98015d563049785e03ec6b58aa2d95c1f688013e3224e6983628b50" exitCode=0 Nov 23 09:32:46 crc kubenswrapper[4988]: I1123 09:32:46.662648 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/crc-debug-8fms8" event={"ID":"f0e5525e-4d56-4c49-b48a-d27edefc7d79","Type":"ContainerDied","Data":"411603e1e98015d563049785e03ec6b58aa2d95c1f688013e3224e6983628b50"} Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.384640 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.435716 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rbvk2/crc-debug-8fms8"] Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.449074 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rbvk2/crc-debug-8fms8"] Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.480596 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5525e-4d56-4c49-b48a-d27edefc7d79-host\") pod \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\" (UID: \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\") " Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.480669 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbn5c\" (UniqueName: \"kubernetes.io/projected/f0e5525e-4d56-4c49-b48a-d27edefc7d79-kube-api-access-fbn5c\") pod \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\" (UID: \"f0e5525e-4d56-4c49-b48a-d27edefc7d79\") " Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.480695 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e5525e-4d56-4c49-b48a-d27edefc7d79-host" (OuterVolumeSpecName: "host") pod "f0e5525e-4d56-4c49-b48a-d27edefc7d79" (UID: "f0e5525e-4d56-4c49-b48a-d27edefc7d79"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.481525 4988 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0e5525e-4d56-4c49-b48a-d27edefc7d79-host\") on node \"crc\" DevicePath \"\"" Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.490049 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e5525e-4d56-4c49-b48a-d27edefc7d79-kube-api-access-fbn5c" (OuterVolumeSpecName: "kube-api-access-fbn5c") pod "f0e5525e-4d56-4c49-b48a-d27edefc7d79" (UID: "f0e5525e-4d56-4c49-b48a-d27edefc7d79"). InnerVolumeSpecName "kube-api-access-fbn5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.511994 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e5525e-4d56-4c49-b48a-d27edefc7d79" path="/var/lib/kubelet/pods/f0e5525e-4d56-4c49-b48a-d27edefc7d79/volumes" Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.585820 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbn5c\" (UniqueName: \"kubernetes.io/projected/f0e5525e-4d56-4c49-b48a-d27edefc7d79-kube-api-access-fbn5c\") on node \"crc\" DevicePath \"\"" Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.685539 4988 scope.go:117] "RemoveContainer" containerID="411603e1e98015d563049785e03ec6b58aa2d95c1f688013e3224e6983628b50" Nov 23 09:32:48 crc kubenswrapper[4988]: I1123 09:32:48.685579 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-8fms8" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.628894 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rbvk2/crc-debug-mlwqv"] Nov 23 09:32:49 crc kubenswrapper[4988]: E1123 09:32:49.629711 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerName="registry-server" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.629727 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerName="registry-server" Nov 23 09:32:49 crc kubenswrapper[4988]: E1123 09:32:49.629751 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerName="extract-content" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.629759 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerName="extract-content" Nov 23 09:32:49 crc kubenswrapper[4988]: E1123 09:32:49.629790 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e5525e-4d56-4c49-b48a-d27edefc7d79" containerName="container-00" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.629799 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e5525e-4d56-4c49-b48a-d27edefc7d79" containerName="container-00" Nov 23 09:32:49 crc kubenswrapper[4988]: E1123 09:32:49.629819 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerName="extract-utilities" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.629827 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerName="extract-utilities" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.630054 4988 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f0e5525e-4d56-4c49-b48a-d27edefc7d79" containerName="container-00" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.630074 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b6e905-5b50-4f64-aa10-43d7eb15df7e" containerName="registry-server" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.630883 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.707558 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac33c0de-0f78-4096-8e71-356da9b9b789-host\") pod \"crc-debug-mlwqv\" (UID: \"ac33c0de-0f78-4096-8e71-356da9b9b789\") " pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.707898 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndz6b\" (UniqueName: \"kubernetes.io/projected/ac33c0de-0f78-4096-8e71-356da9b9b789-kube-api-access-ndz6b\") pod \"crc-debug-mlwqv\" (UID: \"ac33c0de-0f78-4096-8e71-356da9b9b789\") " pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.818087 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac33c0de-0f78-4096-8e71-356da9b9b789-host\") pod \"crc-debug-mlwqv\" (UID: \"ac33c0de-0f78-4096-8e71-356da9b9b789\") " pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.818757 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndz6b\" (UniqueName: \"kubernetes.io/projected/ac33c0de-0f78-4096-8e71-356da9b9b789-kube-api-access-ndz6b\") pod \"crc-debug-mlwqv\" (UID: \"ac33c0de-0f78-4096-8e71-356da9b9b789\") " pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.822655 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac33c0de-0f78-4096-8e71-356da9b9b789-host\") pod \"crc-debug-mlwqv\" (UID: \"ac33c0de-0f78-4096-8e71-356da9b9b789\") " pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.859293 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndz6b\" (UniqueName: \"kubernetes.io/projected/ac33c0de-0f78-4096-8e71-356da9b9b789-kube-api-access-ndz6b\") pod \"crc-debug-mlwqv\" (UID: \"ac33c0de-0f78-4096-8e71-356da9b9b789\") " pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:49 crc kubenswrapper[4988]: I1123 09:32:49.953301 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:50 crc kubenswrapper[4988]: I1123 09:32:50.704122 4988 generic.go:334] "Generic (PLEG): container finished" podID="ac33c0de-0f78-4096-8e71-356da9b9b789" containerID="e8defc95a8b64e3a2f1ef172c794c16d5a217efb869ca228f900f911e0e582c6" exitCode=0 Nov 23 09:32:50 crc kubenswrapper[4988]: I1123 09:32:50.704458 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" event={"ID":"ac33c0de-0f78-4096-8e71-356da9b9b789","Type":"ContainerDied","Data":"e8defc95a8b64e3a2f1ef172c794c16d5a217efb869ca228f900f911e0e582c6"} Nov 23 09:32:50 crc kubenswrapper[4988]: I1123 09:32:50.704488 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" event={"ID":"ac33c0de-0f78-4096-8e71-356da9b9b789","Type":"ContainerStarted","Data":"cbf140516883a29b43fa338c1536907eec3bd608853253501854c94c7a78b830"} Nov 23 09:32:51 crc kubenswrapper[4988]: I1123 09:32:51.824657 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:51 crc kubenswrapper[4988]: I1123 09:32:51.854073 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac33c0de-0f78-4096-8e71-356da9b9b789-host\") pod \"ac33c0de-0f78-4096-8e71-356da9b9b789\" (UID: \"ac33c0de-0f78-4096-8e71-356da9b9b789\") " Nov 23 09:32:51 crc kubenswrapper[4988]: I1123 09:32:51.854148 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac33c0de-0f78-4096-8e71-356da9b9b789-host" (OuterVolumeSpecName: "host") pod "ac33c0de-0f78-4096-8e71-356da9b9b789" (UID: "ac33c0de-0f78-4096-8e71-356da9b9b789"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 09:32:51 crc kubenswrapper[4988]: I1123 09:32:51.854354 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndz6b\" (UniqueName: \"kubernetes.io/projected/ac33c0de-0f78-4096-8e71-356da9b9b789-kube-api-access-ndz6b\") pod \"ac33c0de-0f78-4096-8e71-356da9b9b789\" (UID: \"ac33c0de-0f78-4096-8e71-356da9b9b789\") " Nov 23 09:32:51 crc kubenswrapper[4988]: I1123 09:32:51.854860 4988 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac33c0de-0f78-4096-8e71-356da9b9b789-host\") on node \"crc\" DevicePath \"\"" Nov 23 09:32:51 crc kubenswrapper[4988]: I1123 09:32:51.863596 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac33c0de-0f78-4096-8e71-356da9b9b789-kube-api-access-ndz6b" (OuterVolumeSpecName: "kube-api-access-ndz6b") pod "ac33c0de-0f78-4096-8e71-356da9b9b789" (UID: "ac33c0de-0f78-4096-8e71-356da9b9b789"). InnerVolumeSpecName "kube-api-access-ndz6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:32:51 crc kubenswrapper[4988]: I1123 09:32:51.956135 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndz6b\" (UniqueName: \"kubernetes.io/projected/ac33c0de-0f78-4096-8e71-356da9b9b789-kube-api-access-ndz6b\") on node \"crc\" DevicePath \"\"" Nov 23 09:32:52 crc kubenswrapper[4988]: I1123 09:32:52.722519 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" event={"ID":"ac33c0de-0f78-4096-8e71-356da9b9b789","Type":"ContainerDied","Data":"cbf140516883a29b43fa338c1536907eec3bd608853253501854c94c7a78b830"} Nov 23 09:32:52 crc kubenswrapper[4988]: I1123 09:32:52.722570 4988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbf140516883a29b43fa338c1536907eec3bd608853253501854c94c7a78b830" Nov 23 09:32:52 crc kubenswrapper[4988]: I1123 09:32:52.722580 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-mlwqv" Nov 23 09:32:53 crc kubenswrapper[4988]: I1123 09:32:53.416257 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rbvk2/crc-debug-mlwqv"] Nov 23 09:32:53 crc kubenswrapper[4988]: I1123 09:32:53.427093 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rbvk2/crc-debug-mlwqv"] Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.512972 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac33c0de-0f78-4096-8e71-356da9b9b789" path="/var/lib/kubelet/pods/ac33c0de-0f78-4096-8e71-356da9b9b789/volumes" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.613928 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rbvk2/crc-debug-xpww8"] Nov 23 09:32:54 crc kubenswrapper[4988]: E1123 09:32:54.614469 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac33c0de-0f78-4096-8e71-356da9b9b789" containerName="container-00" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.614499 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac33c0de-0f78-4096-8e71-356da9b9b789" containerName="container-00" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.614791 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac33c0de-0f78-4096-8e71-356da9b9b789" containerName="container-00" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.615665 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.709737 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e6ff8007-5063-446b-91ec-90916c6527c8-host\") pod \"crc-debug-xpww8\" (UID: \"e6ff8007-5063-446b-91ec-90916c6527c8\") " pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.710273 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r6b7\" (UniqueName: \"kubernetes.io/projected/e6ff8007-5063-446b-91ec-90916c6527c8-kube-api-access-6r6b7\") pod \"crc-debug-xpww8\" (UID: \"e6ff8007-5063-446b-91ec-90916c6527c8\") " pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.812348 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e6ff8007-5063-446b-91ec-90916c6527c8-host\") pod \"crc-debug-xpww8\" (UID: \"e6ff8007-5063-446b-91ec-90916c6527c8\") " pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.812426 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r6b7\" (UniqueName: \"kubernetes.io/projected/e6ff8007-5063-446b-91ec-90916c6527c8-kube-api-access-6r6b7\") pod \"crc-debug-xpww8\" (UID: \"e6ff8007-5063-446b-91ec-90916c6527c8\") " pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.812516 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e6ff8007-5063-446b-91ec-90916c6527c8-host\") pod \"crc-debug-xpww8\" (UID: \"e6ff8007-5063-446b-91ec-90916c6527c8\") " pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.834700 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r6b7\" (UniqueName: \"kubernetes.io/projected/e6ff8007-5063-446b-91ec-90916c6527c8-kube-api-access-6r6b7\") pod \"crc-debug-xpww8\" (UID: \"e6ff8007-5063-446b-91ec-90916c6527c8\") " pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:54 crc kubenswrapper[4988]: I1123 09:32:54.949329 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:55 crc kubenswrapper[4988]: I1123 09:32:55.769731 4988 generic.go:334] "Generic (PLEG): container finished" podID="e6ff8007-5063-446b-91ec-90916c6527c8" containerID="4780a810f41cacbfb404f8a78a24dfb9fc2a01ec92dbbca80ebee4f88d047a6c" exitCode=0 Nov 23 09:32:55 crc kubenswrapper[4988]: I1123 09:32:55.769822 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/crc-debug-xpww8" event={"ID":"e6ff8007-5063-446b-91ec-90916c6527c8","Type":"ContainerDied","Data":"4780a810f41cacbfb404f8a78a24dfb9fc2a01ec92dbbca80ebee4f88d047a6c"} Nov 23 09:32:55 crc kubenswrapper[4988]: I1123 09:32:55.770311 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/crc-debug-xpww8" event={"ID":"e6ff8007-5063-446b-91ec-90916c6527c8","Type":"ContainerStarted","Data":"7a5ce7391e55e0c16c62946cc10a596b6f4c9ca9935fdb784246bd517279e221"} Nov 23 09:32:55 crc kubenswrapper[4988]: I1123 09:32:55.821636 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rbvk2/crc-debug-xpww8"] Nov 23 09:32:55 crc kubenswrapper[4988]: I1123 09:32:55.836739 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rbvk2/crc-debug-xpww8"] Nov 23 09:32:56 crc kubenswrapper[4988]: I1123 09:32:56.921763 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:56 crc kubenswrapper[4988]: I1123 09:32:56.958019 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e6ff8007-5063-446b-91ec-90916c6527c8-host\") pod \"e6ff8007-5063-446b-91ec-90916c6527c8\" (UID: \"e6ff8007-5063-446b-91ec-90916c6527c8\") " Nov 23 09:32:56 crc kubenswrapper[4988]: I1123 09:32:56.958215 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6ff8007-5063-446b-91ec-90916c6527c8-host" (OuterVolumeSpecName: "host") pod "e6ff8007-5063-446b-91ec-90916c6527c8" (UID: "e6ff8007-5063-446b-91ec-90916c6527c8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 09:32:56 crc kubenswrapper[4988]: I1123 09:32:56.958334 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r6b7\" (UniqueName: \"kubernetes.io/projected/e6ff8007-5063-446b-91ec-90916c6527c8-kube-api-access-6r6b7\") pod \"e6ff8007-5063-446b-91ec-90916c6527c8\" (UID: \"e6ff8007-5063-446b-91ec-90916c6527c8\") " Nov 23 09:32:56 crc kubenswrapper[4988]: I1123 09:32:56.959177 4988 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e6ff8007-5063-446b-91ec-90916c6527c8-host\") on node \"crc\" DevicePath \"\"" Nov 23 09:32:56 crc kubenswrapper[4988]: I1123 09:32:56.967037 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ff8007-5063-446b-91ec-90916c6527c8-kube-api-access-6r6b7" (OuterVolumeSpecName: "kube-api-access-6r6b7") pod "e6ff8007-5063-446b-91ec-90916c6527c8" (UID: "e6ff8007-5063-446b-91ec-90916c6527c8"). InnerVolumeSpecName "kube-api-access-6r6b7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:32:57 crc kubenswrapper[4988]: I1123 09:32:57.062286 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r6b7\" (UniqueName: \"kubernetes.io/projected/e6ff8007-5063-446b-91ec-90916c6527c8-kube-api-access-6r6b7\") on node \"crc\" DevicePath \"\"" Nov 23 09:32:57 crc kubenswrapper[4988]: I1123 09:32:57.796516 4988 scope.go:117] "RemoveContainer" containerID="4780a810f41cacbfb404f8a78a24dfb9fc2a01ec92dbbca80ebee4f88d047a6c" Nov 23 09:32:57 crc kubenswrapper[4988]: I1123 09:32:57.796615 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/crc-debug-xpww8" Nov 23 09:32:58 crc kubenswrapper[4988]: I1123 09:32:58.511973 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ff8007-5063-446b-91ec-90916c6527c8" path="/var/lib/kubelet/pods/e6ff8007-5063-446b-91ec-90916c6527c8/volumes" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.373721 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t9hk2"] Nov 23 09:33:18 crc kubenswrapper[4988]: E1123 09:33:18.375131 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ff8007-5063-446b-91ec-90916c6527c8" containerName="container-00" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.375160 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ff8007-5063-446b-91ec-90916c6527c8" containerName="container-00" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.375649 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ff8007-5063-446b-91ec-90916c6527c8" containerName="container-00" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.378517 4988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.391977 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9hk2"] Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.540213 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-utilities\") pod \"certified-operators-t9hk2\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.540324 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-catalog-content\") pod \"certified-operators-t9hk2\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.540404 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp9vk\" (UniqueName: \"kubernetes.io/projected/cb3ea761-d896-49b6-b243-156adfdb47d4-kube-api-access-sp9vk\") pod \"certified-operators-t9hk2\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.642764 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-utilities\") pod \"certified-operators-t9hk2\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.642915 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-catalog-content\") pod \"certified-operators-t9hk2\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.643012 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp9vk\" (UniqueName: \"kubernetes.io/projected/cb3ea761-d896-49b6-b243-156adfdb47d4-kube-api-access-sp9vk\") pod \"certified-operators-t9hk2\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.643744 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-catalog-content\") pod \"certified-operators-t9hk2\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.644026 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-utilities\") pod \"certified-operators-t9hk2\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.673498 4988 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sp9vk\" (UniqueName: \"kubernetes.io/projected/cb3ea761-d896-49b6-b243-156adfdb47d4-kube-api-access-sp9vk\") pod \"certified-operators-t9hk2\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:18 crc kubenswrapper[4988]: I1123 09:33:18.735971 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:19 crc kubenswrapper[4988]: I1123 09:33:19.090589 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t9hk2"] Nov 23 09:33:20 crc kubenswrapper[4988]: I1123 09:33:20.080785 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerID="f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e" exitCode=0 Nov 23 09:33:20 crc kubenswrapper[4988]: I1123 09:33:20.081064 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9hk2" event={"ID":"cb3ea761-d896-49b6-b243-156adfdb47d4","Type":"ContainerDied","Data":"f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e"} Nov 23 09:33:20 crc kubenswrapper[4988]: I1123 09:33:20.081094 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9hk2" event={"ID":"cb3ea761-d896-49b6-b243-156adfdb47d4","Type":"ContainerStarted","Data":"25322bbbfeea21bb2b3e6e7eee8e91b0283e75d82f0c13079f6603402c4317e7"} Nov 23 09:33:21 crc kubenswrapper[4988]: I1123 09:33:21.096088 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9hk2" event={"ID":"cb3ea761-d896-49b6-b243-156adfdb47d4","Type":"ContainerStarted","Data":"e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724"} Nov 23 09:33:22 crc kubenswrapper[4988]: I1123 09:33:22.117566 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerID="e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724" exitCode=0 Nov 23 09:33:22 crc kubenswrapper[4988]: I1123 09:33:22.117657 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9hk2" event={"ID":"cb3ea761-d896-49b6-b243-156adfdb47d4","Type":"ContainerDied","Data":"e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724"} Nov 23 09:33:23 crc kubenswrapper[4988]: I1123 09:33:23.133844 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9hk2" event={"ID":"cb3ea761-d896-49b6-b243-156adfdb47d4","Type":"ContainerStarted","Data":"0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a"} Nov 23 09:33:23 crc kubenswrapper[4988]: I1123 09:33:23.158039 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t9hk2" podStartSLOduration=2.704923921 podStartE2EDuration="5.158019689s" podCreationTimestamp="2025-11-23 09:33:18 +0000 UTC" firstStartedPulling="2025-11-23 09:33:20.08300253 +0000 UTC m=+10052.391515293" lastFinishedPulling="2025-11-23 09:33:22.536098258 +0000 UTC m=+10054.844611061" observedRunningTime="2025-11-23 09:33:23.153590599 +0000 UTC m=+10055.462103362" watchObservedRunningTime="2025-11-23 09:33:23.158019689 +0000 UTC m=+10055.466532462" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.753402 4988 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-xnm9k"] Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.757605 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.767645 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xnm9k"] Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.819639 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-catalog-content\") pod \"community-operators-xnm9k\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.819958 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-utilities\") pod \"community-operators-xnm9k\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.820172 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmzwz\" (UniqueName: \"kubernetes.io/projected/bebe2475-31e1-4008-bd53-c71d60b8e3cf-kube-api-access-xmzwz\") pod \"community-operators-xnm9k\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.921955 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-catalog-content\") pod \"community-operators-xnm9k\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.922296 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-utilities\") pod \"community-operators-xnm9k\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.922387 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmzwz\" (UniqueName: \"kubernetes.io/projected/bebe2475-31e1-4008-bd53-c71d60b8e3cf-kube-api-access-xmzwz\") pod \"community-operators-xnm9k\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.923004 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-catalog-content\") pod \"community-operators-xnm9k\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.923172 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-utilities\") pod \"community-operators-xnm9k\" (UID: 
\"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:25 crc kubenswrapper[4988]: I1123 09:33:25.947607 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmzwz\" (UniqueName: \"kubernetes.io/projected/bebe2475-31e1-4008-bd53-c71d60b8e3cf-kube-api-access-xmzwz\") pod \"community-operators-xnm9k\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:26 crc kubenswrapper[4988]: I1123 09:33:26.091522 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:26 crc kubenswrapper[4988]: I1123 09:33:26.768724 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xnm9k"] Nov 23 09:33:26 crc kubenswrapper[4988]: W1123 09:33:26.775675 4988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbebe2475_31e1_4008_bd53_c71d60b8e3cf.slice/crio-0b255c207cff0cea65a886affd70c43559a0f68fa161480e23d3de8f424afb58 WatchSource:0}: Error finding container 0b255c207cff0cea65a886affd70c43559a0f68fa161480e23d3de8f424afb58: Status 404 returned error can't find the container with id 0b255c207cff0cea65a886affd70c43559a0f68fa161480e23d3de8f424afb58 Nov 23 09:33:27 crc kubenswrapper[4988]: I1123 09:33:27.183235 4988 generic.go:334] "Generic (PLEG): container finished" podID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerID="4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7" exitCode=0 Nov 23 09:33:27 crc kubenswrapper[4988]: I1123 09:33:27.183294 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnm9k" event={"ID":"bebe2475-31e1-4008-bd53-c71d60b8e3cf","Type":"ContainerDied","Data":"4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7"} Nov 23 09:33:27 crc kubenswrapper[4988]: I1123 09:33:27.183326 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnm9k" event={"ID":"bebe2475-31e1-4008-bd53-c71d60b8e3cf","Type":"ContainerStarted","Data":"0b255c207cff0cea65a886affd70c43559a0f68fa161480e23d3de8f424afb58"} Nov 23 09:33:28 crc kubenswrapper[4988]: I1123 09:33:28.737119 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:28 crc kubenswrapper[4988]: I1123 09:33:28.737819 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:28 crc kubenswrapper[4988]: I1123 09:33:28.808759 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:29 crc kubenswrapper[4988]: I1123 09:33:29.204832 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnm9k" event={"ID":"bebe2475-31e1-4008-bd53-c71d60b8e3cf","Type":"ContainerStarted","Data":"8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f"} Nov 23 09:33:29 crc kubenswrapper[4988]: I1123 09:33:29.275963 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:30 crc kubenswrapper[4988]: I1123 09:33:30.217332 4988 generic.go:334] "Generic (PLEG): container finished" 
podID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerID="8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f" exitCode=0 Nov 23 09:33:30 crc kubenswrapper[4988]: I1123 09:33:30.217419 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnm9k" event={"ID":"bebe2475-31e1-4008-bd53-c71d60b8e3cf","Type":"ContainerDied","Data":"8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f"} Nov 23 09:33:30 crc kubenswrapper[4988]: I1123 09:33:30.538113 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9hk2"] Nov 23 09:33:31 crc kubenswrapper[4988]: I1123 09:33:31.231035 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t9hk2" podUID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerName="registry-server" containerID="cri-o://0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a" gracePeriod=2 Nov 23 09:33:31 crc kubenswrapper[4988]: I1123 09:33:31.232418 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnm9k" event={"ID":"bebe2475-31e1-4008-bd53-c71d60b8e3cf","Type":"ContainerStarted","Data":"49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b"} Nov 23 09:33:31 crc kubenswrapper[4988]: I1123 09:33:31.267933 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xnm9k" podStartSLOduration=2.803726358 podStartE2EDuration="6.267912346s" podCreationTimestamp="2025-11-23 09:33:25 +0000 UTC" firstStartedPulling="2025-11-23 09:33:27.18516201 +0000 UTC m=+10059.493674773" lastFinishedPulling="2025-11-23 09:33:30.649347978 +0000 UTC m=+10062.957860761" observedRunningTime="2025-11-23 09:33:31.257500299 +0000 UTC m=+10063.566013072" watchObservedRunningTime="2025-11-23 09:33:31.267912346 +0000 UTC m=+10063.576425109" Nov 23 09:33:31 crc kubenswrapper[4988]: I1123 09:33:31.834340 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:31 crc kubenswrapper[4988]: I1123 09:33:31.968068 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-utilities\") pod \"cb3ea761-d896-49b6-b243-156adfdb47d4\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " Nov 23 09:33:31 crc kubenswrapper[4988]: I1123 09:33:31.968411 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-catalog-content\") pod \"cb3ea761-d896-49b6-b243-156adfdb47d4\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " Nov 23 09:33:31 crc kubenswrapper[4988]: I1123 09:33:31.968616 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp9vk\" (UniqueName: \"kubernetes.io/projected/cb3ea761-d896-49b6-b243-156adfdb47d4-kube-api-access-sp9vk\") pod \"cb3ea761-d896-49b6-b243-156adfdb47d4\" (UID: \"cb3ea761-d896-49b6-b243-156adfdb47d4\") " Nov 23 09:33:31 crc kubenswrapper[4988]: I1123 09:33:31.969257 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-utilities" (OuterVolumeSpecName: "utilities") pod "cb3ea761-d896-49b6-b243-156adfdb47d4" (UID: "cb3ea761-d896-49b6-b243-156adfdb47d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:33:31 crc kubenswrapper[4988]: I1123 09:33:31.978185 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb3ea761-d896-49b6-b243-156adfdb47d4-kube-api-access-sp9vk" (OuterVolumeSpecName: "kube-api-access-sp9vk") pod "cb3ea761-d896-49b6-b243-156adfdb47d4" (UID: "cb3ea761-d896-49b6-b243-156adfdb47d4"). InnerVolumeSpecName "kube-api-access-sp9vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.013651 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb3ea761-d896-49b6-b243-156adfdb47d4" (UID: "cb3ea761-d896-49b6-b243-156adfdb47d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.071700 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp9vk\" (UniqueName: \"kubernetes.io/projected/cb3ea761-d896-49b6-b243-156adfdb47d4-kube-api-access-sp9vk\") on node \"crc\" DevicePath \"\"" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.071754 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.071769 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb3ea761-d896-49b6-b243-156adfdb47d4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.242694 4988 generic.go:334] "Generic (PLEG): container finished" podID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerID="0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a" exitCode=0 Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.242777 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t9hk2" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.242811 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9hk2" event={"ID":"cb3ea761-d896-49b6-b243-156adfdb47d4","Type":"ContainerDied","Data":"0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a"} Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.242870 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t9hk2" event={"ID":"cb3ea761-d896-49b6-b243-156adfdb47d4","Type":"ContainerDied","Data":"25322bbbfeea21bb2b3e6e7eee8e91b0283e75d82f0c13079f6603402c4317e7"} Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.242892 4988 scope.go:117] "RemoveContainer" containerID="0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.272882 4988 scope.go:117] "RemoveContainer" containerID="e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.273396 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t9hk2"] Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.288357 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t9hk2"] Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.292903 4988 scope.go:117] "RemoveContainer" containerID="f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.360997 4988 scope.go:117] "RemoveContainer" containerID="0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a" Nov 23 09:33:32 crc kubenswrapper[4988]: E1123 09:33:32.362539 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a\": container with ID starting with 0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a not found: ID does not exist" containerID="0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.362568 
4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a"} err="failed to get container status \"0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a\": rpc error: code = NotFound desc = could not find container \"0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a\": container with ID starting with 0df446433e856edce37714108516b0ba89592e052f8c9ba6c4ea3f3965c21a2a not found: ID does not exist" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.362588 4988 scope.go:117] "RemoveContainer" containerID="e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724" Nov 23 09:33:32 crc kubenswrapper[4988]: E1123 09:33:32.362861 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724\": container with ID starting with e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724 not found: ID does not exist" containerID="e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.362880 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724"} err="failed to get container status \"e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724\": rpc error: code = NotFound desc = could not find container \"e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724\": container with ID starting with e619fbc599c664bdc6cc47dcf4faca0a6d57577424b0f0510e1510a711d37724 not found: ID does not exist" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.362892 4988 scope.go:117] "RemoveContainer" containerID="f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e" Nov 23 09:33:32 crc kubenswrapper[4988]: E1123 09:33:32.363093 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e\": container with ID starting with f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e not found: ID does not exist" containerID="f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.363113 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e"} err="failed to get container status \"f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e\": rpc error: code = NotFound desc = could not find container \"f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e\": container with ID starting with f7550130de0817c38b4a4fac79f6d2a679805e1a44ec1f5d2800647aaa758a7e not found: ID does not exist" Nov 23 09:33:32 crc kubenswrapper[4988]: I1123 09:33:32.506542 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb3ea761-d896-49b6-b243-156adfdb47d4" path="/var/lib/kubelet/pods/cb3ea761-d896-49b6-b243-156adfdb47d4/volumes" Nov 23 09:33:36 crc kubenswrapper[4988]: I1123 09:33:36.093060 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:36 crc kubenswrapper[4988]: I1123 09:33:36.094422 4988 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:36 crc kubenswrapper[4988]: I1123 09:33:36.186259 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:36 crc kubenswrapper[4988]: I1123 09:33:36.361502 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:37 crc kubenswrapper[4988]: I1123 09:33:37.129096 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xnm9k"] Nov 23 09:33:38 crc kubenswrapper[4988]: I1123 09:33:38.315417 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xnm9k" podUID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerName="registry-server" containerID="cri-o://49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b" gracePeriod=2 Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.216528 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.329137 4988 generic.go:334] "Generic (PLEG): container finished" podID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerID="49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b" exitCode=0 Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.329202 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnm9k" event={"ID":"bebe2475-31e1-4008-bd53-c71d60b8e3cf","Type":"ContainerDied","Data":"49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b"} Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.329227 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xnm9k" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.329250 4988 scope.go:117] "RemoveContainer" containerID="49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.329235 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xnm9k" event={"ID":"bebe2475-31e1-4008-bd53-c71d60b8e3cf","Type":"ContainerDied","Data":"0b255c207cff0cea65a886affd70c43559a0f68fa161480e23d3de8f424afb58"} Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.338187 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-utilities\") pod \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.338425 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-catalog-content\") pod \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.338499 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmzwz\" (UniqueName: \"kubernetes.io/projected/bebe2475-31e1-4008-bd53-c71d60b8e3cf-kube-api-access-xmzwz\") pod \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\" (UID: \"bebe2475-31e1-4008-bd53-c71d60b8e3cf\") " Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.339723 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-utilities" (OuterVolumeSpecName: "utilities") pod "bebe2475-31e1-4008-bd53-c71d60b8e3cf" (UID: "bebe2475-31e1-4008-bd53-c71d60b8e3cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.350371 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebe2475-31e1-4008-bd53-c71d60b8e3cf-kube-api-access-xmzwz" (OuterVolumeSpecName: "kube-api-access-xmzwz") pod "bebe2475-31e1-4008-bd53-c71d60b8e3cf" (UID: "bebe2475-31e1-4008-bd53-c71d60b8e3cf"). InnerVolumeSpecName "kube-api-access-xmzwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.354532 4988 scope.go:117] "RemoveContainer" containerID="8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.400792 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bebe2475-31e1-4008-bd53-c71d60b8e3cf" (UID: "bebe2475-31e1-4008-bd53-c71d60b8e3cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.407445 4988 scope.go:117] "RemoveContainer" containerID="4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.442425 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmzwz\" (UniqueName: \"kubernetes.io/projected/bebe2475-31e1-4008-bd53-c71d60b8e3cf-kube-api-access-xmzwz\") on node \"crc\" DevicePath \"\"" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.442479 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.442503 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebe2475-31e1-4008-bd53-c71d60b8e3cf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.532914 4988 scope.go:117] "RemoveContainer" containerID="49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b" Nov 23 09:33:39 crc kubenswrapper[4988]: E1123 09:33:39.533849 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b\": container with ID starting with 49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b not found: ID does not exist" containerID="49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.533953 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b"} err="failed to get container status \"49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b\": rpc error: code = NotFound desc = could not find container \"49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b\": container with ID starting with 49db8fc56817d004c9517617219bf74ff3f950b344bfd9ca997e91be524b262b not found: ID does not exist" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.533994 4988 scope.go:117] "RemoveContainer" containerID="8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f" Nov 23 09:33:39 crc kubenswrapper[4988]: E1123 09:33:39.534794 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f\": container with ID starting with 8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f not found: ID does not exist" containerID="8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.534837 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f"} err="failed to get container status \"8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f\": rpc error: code = NotFound desc = could not find container \"8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f\": container with ID starting with 8fe2eacf3bf5041e1150f3b7cb1e820f4c5fd3410de46151bfdf7599a800361f not found: ID does not exist" Nov 23 09:33:39 crc 
kubenswrapper[4988]: I1123 09:33:39.534863 4988 scope.go:117] "RemoveContainer" containerID="4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7" Nov 23 09:33:39 crc kubenswrapper[4988]: E1123 09:33:39.535306 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7\": container with ID starting with 4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7 not found: ID does not exist" containerID="4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.535351 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7"} err="failed to get container status \"4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7\": rpc error: code = NotFound desc = could not find container \"4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7\": container with ID starting with 4e75c32129207ba9b1484015f3e1a158fc25bbe0d7ea3df092a872d05cabc1c7 not found: ID does not exist" Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.686334 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xnm9k"] Nov 23 09:33:39 crc kubenswrapper[4988]: I1123 09:33:39.703978 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xnm9k"] Nov 23 09:33:40 crc kubenswrapper[4988]: I1123 09:33:40.506831 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" path="/var/lib/kubelet/pods/bebe2475-31e1-4008-bd53-c71d60b8e3cf/volumes" Nov 23 09:33:51 crc kubenswrapper[4988]: I1123 09:33:51.672974 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:33:51 crc kubenswrapper[4988]: I1123 09:33:51.673723 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:34:21 crc kubenswrapper[4988]: I1123 09:34:21.671903 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:34:21 crc kubenswrapper[4988]: I1123 09:34:21.672538 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.157366 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45368a7a-7b66-4d55-a8a7-a306d69e6858/init-config-reloader/0.log" Nov 23 09:34:47 
crc kubenswrapper[4988]: I1123 09:34:47.284818 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45368a7a-7b66-4d55-a8a7-a306d69e6858/init-config-reloader/0.log" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.335114 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45368a7a-7b66-4d55-a8a7-a306d69e6858/alertmanager/0.log" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.362575 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_45368a7a-7b66-4d55-a8a7-a306d69e6858/config-reloader/0.log" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.529774 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ad8a49ac-30cb-4638-b369-fa9afad39287/aodh-api/0.log" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.576533 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ad8a49ac-30cb-4638-b369-fa9afad39287/aodh-evaluator/0.log" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.640350 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ad8a49ac-30cb-4638-b369-fa9afad39287/aodh-listener/0.log" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.750468 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ad8a49ac-30cb-4638-b369-fa9afad39287/aodh-notifier/0.log" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.788362 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74558fc978-jmz5k_d4e6ece2-9c04-46db-b25d-098cea0e6fde/barbican-api/0.log" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.854717 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-74558fc978-jmz5k_d4e6ece2-9c04-46db-b25d-098cea0e6fde/barbican-api-log/0.log" Nov 23 09:34:47 crc kubenswrapper[4988]: I1123 09:34:47.927407 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57b89bfb9d-qwlsk_9b638fc0-e2a1-4624-aa58-525d3e06ff6e/barbican-keystone-listener/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.109144 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6654778d7f-p8j7f_3ff9e262-d8ae-457d-add2-26dc18b4e376/barbican-worker/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.266147 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6654778d7f-p8j7f_3ff9e262-d8ae-457d-add2-26dc18b4e376/barbican-worker-log/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.359763 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57b89bfb9d-qwlsk_9b638fc0-e2a1-4624-aa58-525d3e06ff6e/barbican-keystone-listener-log/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.459718 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-xn2d7_f045fce4-5fd2-4a28-8502-1b840639d64c/bootstrap-openstack-openstack-cell1/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.574330 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dce5c5d3-bdb1-4f2a-a12d-9e093f47262a/ceilometer-central-agent/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.619692 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_dce5c5d3-bdb1-4f2a-a12d-9e093f47262a/ceilometer-notification-agent/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.671756 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dce5c5d3-bdb1-4f2a-a12d-9e093f47262a/proxy-httpd/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.777406 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_dce5c5d3-bdb1-4f2a-a12d-9e093f47262a/sg-core/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.886611 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3471cd03-a8d4-4923-a881-26121e7ceaef/cinder-api/0.log" Nov 23 09:34:48 crc kubenswrapper[4988]: I1123 09:34:48.891941 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3471cd03-a8d4-4923-a881-26121e7ceaef/cinder-api-log/0.log" Nov 23 09:34:49 crc kubenswrapper[4988]: I1123 09:34:49.131158 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d52991dd-0f33-4543-bbaf-5abe1ee31cbc/cinder-scheduler/0.log" Nov 23 09:34:49 crc kubenswrapper[4988]: I1123 09:34:49.168456 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d52991dd-0f33-4543-bbaf-5abe1ee31cbc/probe/0.log" Nov 23 09:34:49 crc kubenswrapper[4988]: I1123 09:34:49.280510 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-z8lv6_4ad1eff7-64f5-4009-8142-48fd2984fa39/configure-network-openstack-openstack-cell1/0.log" Nov 23 09:34:49 crc kubenswrapper[4988]: I1123 09:34:49.400912 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-tcnzd_cc698bbf-b5d4-48da-8527-8524710072c3/configure-os-openstack-openstack-cell1/0.log" Nov 23 09:34:49 crc kubenswrapper[4988]: I1123 09:34:49.538472 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6fb59c9d47-q8wcr_7adc1cbc-f014-4666-b8ca-0400120c3c3e/init/0.log" Nov 23 09:34:50 crc kubenswrapper[4988]: I1123 09:34:50.611069 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-rjb52_84c4086a-6e6a-4f2a-8fc2-b5416199c070/download-cache-openstack-openstack-cell1/0.log" Nov 23 09:34:50 crc kubenswrapper[4988]: I1123 09:34:50.612058 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6fb59c9d47-q8wcr_7adc1cbc-f014-4666-b8ca-0400120c3c3e/init/0.log" Nov 23 09:34:50 crc kubenswrapper[4988]: I1123 09:34:50.658748 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6fb59c9d47-q8wcr_7adc1cbc-f014-4666-b8ca-0400120c3c3e/dnsmasq-dns/0.log" Nov 23 09:34:50 crc kubenswrapper[4988]: I1123 09:34:50.845495 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cf7d70c6-f3ab-4ff4-a49b-23401df56b9e/glance-log/0.log" Nov 23 09:34:50 crc kubenswrapper[4988]: I1123 09:34:50.866729 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cf7d70c6-f3ab-4ff4-a49b-23401df56b9e/glance-httpd/0.log" Nov 23 09:34:50 crc kubenswrapper[4988]: I1123 09:34:50.951516 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_516e27e0-3465-45ab-9f04-76306f952a0b/glance-httpd/0.log" Nov 23 09:34:50 crc kubenswrapper[4988]: I1123 
09:34:50.971487 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_516e27e0-3465-45ab-9f04-76306f952a0b/glance-log/0.log" Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.406877 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-b646bc85-sjc6z_01c678f9-ada0-449b-b53e-d5831743585c/heat-engine/0.log" Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.671877 4988 patch_prober.go:28] interesting pod/machine-config-daemon-jnwbw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.671933 4988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.671980 4988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.672544 4988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c"} pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.672589 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerName="machine-config-daemon" containerID="cri-o://e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" gracePeriod=600 Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.731764 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-568f945488-bz9kx_24372f9b-b303-4136-bbe5-30ffd8b21823/heat-api/0.log" Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.791660 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-767f5c4c7b-vjzcc_ead6449e-2c88-477e-97fb-1f6ad9bcc287/horizon/0.log" Nov 23 09:34:51 crc kubenswrapper[4988]: E1123 09:34:51.802132 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.939074 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-2vl98_c4a7b919-0961-4e11-9804-30c7c3771ef4/install-certs-openstack-openstack-cell1/0.log" Nov 23 09:34:51 crc kubenswrapper[4988]: I1123 09:34:51.990843 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-cfnapi-79f6fbff8d-p2hvc_26bd0fe8-c472-4378-b583-87868be32419/heat-cfnapi/0.log" Nov 23 09:34:52 crc kubenswrapper[4988]: I1123 09:34:52.125796 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-5f9wv_f89ffe71-9a0e-46e6-b982-02107da4ea39/install-os-openstack-openstack-cell1/0.log" Nov 23 09:34:52 crc kubenswrapper[4988]: I1123 09:34:52.125859 4988 generic.go:334] "Generic (PLEG): container finished" podID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" exitCode=0 Nov 23 09:34:52 crc kubenswrapper[4988]: I1123 09:34:52.125891 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerDied","Data":"e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c"} Nov 23 09:34:52 crc kubenswrapper[4988]: I1123 09:34:52.126366 4988 scope.go:117] "RemoveContainer" containerID="5281cb4afb6a0bf021c95e9ac802a383d473c5aa6df21196380a7b7479a165b9" Nov 23 09:34:52 crc kubenswrapper[4988]: I1123 09:34:52.127590 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:34:52 crc kubenswrapper[4988]: E1123 09:34:52.127871 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:34:52 crc kubenswrapper[4988]: I1123 09:34:52.326021 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-767f5c4c7b-vjzcc_ead6449e-2c88-477e-97fb-1f6ad9bcc287/horizon-log/0.log" Nov 23 09:34:52 crc kubenswrapper[4988]: I1123 09:34:52.727778 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29398141-wbcdx_146957be-9dc7-4f00-b343-f4d72b52ea64/keystone-cron/0.log" Nov 23 09:34:52 crc kubenswrapper[4988]: I1123 09:34:52.928716 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8de623af-ca19-44fc-a166-091a0977bd5d/kube-state-metrics/0.log" Nov 23 09:34:53 crc kubenswrapper[4988]: I1123 09:34:53.034540 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-8f6xc_9dacc32b-acd1-4160-914d-f3c2dfd68baa/libvirt-openstack-openstack-cell1/0.log" Nov 23 09:34:53 crc kubenswrapper[4988]: I1123 09:34:53.462674 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-77c6bb66bc-7wqr8_ee57b7ee-b5ed-40e9-bad1-b2e1b8dd566f/keystone-api/0.log" Nov 23 09:34:53 crc kubenswrapper[4988]: I1123 09:34:53.811119 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-798d8dcd57-qzxxn_733fec61-c6d0-4ab6-b4c6-adfa6f18290d/neutron-httpd/0.log" Nov 23 09:34:53 crc kubenswrapper[4988]: I1123 09:34:53.837957 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-nrd2w_890a5459-3557-40c1-a1fc-e44689e6525d/neutron-dhcp-openstack-openstack-cell1/0.log" Nov 23 09:34:53 crc kubenswrapper[4988]: I1123 09:34:53.946590 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-798d8dcd57-qzxxn_733fec61-c6d0-4ab6-b4c6-adfa6f18290d/neutron-api/0.log" Nov 23 09:34:54 crc kubenswrapper[4988]: I1123 09:34:54.099052 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-t9lcd_5d4220c3-c550-45e9-be03-fb88df750921/neutron-metadata-openstack-openstack-cell1/0.log" Nov 23 09:34:54 crc kubenswrapper[4988]: I1123 09:34:54.214833 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-gvl89_9667739d-8f5f-4d13-8054-ed5d92987432/neutron-sriov-openstack-openstack-cell1/0.log" Nov 23 09:34:54 crc kubenswrapper[4988]: I1123 09:34:54.738629 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ce731121-aae3-4ce2-90ae-29f9b5b5a40a/nova-cell0-conductor-conductor/0.log" Nov 23 09:34:54 crc kubenswrapper[4988]: I1123 09:34:54.742115 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4c2edb10-f06f-4f09-8567-41e3c7893154/nova-api-log/0.log" Nov 23 09:34:55 crc kubenswrapper[4988]: I1123 09:34:55.122526 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_4c2edb10-f06f-4f09-8567-41e3c7893154/nova-api-api/0.log" Nov 23 09:34:55 crc kubenswrapper[4988]: I1123 09:34:55.122934 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8d7273a5-2352-4130-b94f-fe9b43fdd727/nova-cell1-conductor-conductor/0.log" Nov 23 09:34:55 crc kubenswrapper[4988]: I1123 09:34:55.166669 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8ba70dda-e08f-4b54-8536-9652905f571b/nova-cell1-novncproxy-novncproxy/0.log" Nov 23 09:34:55 crc kubenswrapper[4988]: I1123 09:34:55.474261 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell5mq8n_ca476e09-7dd2-40e8-9904-330ae85a51e0/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Nov 23 09:34:55 crc kubenswrapper[4988]: I1123 09:34:55.529373 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-c8j5z_c80e8bf1-ba39-4578-9aaf-500df71fe1a2/nova-cell1-openstack-openstack-cell1/0.log" Nov 23 09:34:55 crc kubenswrapper[4988]: I1123 09:34:55.742421 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9f76eb04-e618-4e95-a091-9a4b1a4c6065/nova-metadata-log/0.log" Nov 23 09:34:55 crc kubenswrapper[4988]: I1123 09:34:55.954889 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_038b42e6-7d25-4e90-8acd-cc9d94fedb24/nova-scheduler-scheduler/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:55.999992 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a31e193-64cd-4be2-bbe6-9b899d22c30f/mysql-bootstrap/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:56.163726 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a31e193-64cd-4be2-bbe6-9b899d22c30f/mysql-bootstrap/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:56.219848 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5a31e193-64cd-4be2-bbe6-9b899d22c30f/galera/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:56.413784 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_9f76eb04-e618-4e95-a091-9a4b1a4c6065/nova-metadata-metadata/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:56.414688 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_214ef4a2-145f-4545-9f14-634ff88be88a/mysql-bootstrap/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:56.591359 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_214ef4a2-145f-4545-9f14-634ff88be88a/mysql-bootstrap/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:56.597574 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_214ef4a2-145f-4545-9f14-634ff88be88a/galera/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:56.705825 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a8f050d7-63e6-4f3a-b0a8-f38370327852/openstackclient/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:56.872091 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5/openstack-network-exporter/0.log" Nov 23 09:34:56 crc kubenswrapper[4988]: I1123 09:34:56.947873 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5fbbcf78-c7e7-40af-a7d2-1e82b6ca71c5/ovn-northd/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.138428 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a9ae31f4-b75f-41a4-8794-b12381abe024/openstack-network-exporter/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.172448 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-4skk7_daa5eecb-492f-418f-a41b-70cb8d86d9fc/ovn-openstack-openstack-cell1/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.226210 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a9ae31f4-b75f-41a4-8794-b12381abe024/ovsdbserver-nb/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.383414 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_ecf8ff15-93c9-45ec-a013-c3e043b01e8d/openstack-network-exporter/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.410535 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_ecf8ff15-93c9-45ec-a013-c3e043b01e8d/ovsdbserver-nb/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.591912 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d/ovsdbserver-nb/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.610269 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_65ac0c25-4d62-4ab7-8d8e-0c1e1145d77d/openstack-network-exporter/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.767060 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938/openstack-network-exporter/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.775838 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e13446ae-e2d8-4e82-b1fe-fa6e4fe7a938/ovsdbserver-sb/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.908873 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-1_4f651a69-31ca-40dd-a065-c81f64c4e34c/openstack-network-exporter/0.log" Nov 23 09:34:57 crc kubenswrapper[4988]: I1123 09:34:57.966329 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_4f651a69-31ca-40dd-a065-c81f64c4e34c/ovsdbserver-sb/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.106130 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_66e01e8e-febc-4ccc-b863-3e24332ba0f9/ovsdbserver-sb/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.109360 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_66e01e8e-febc-4ccc-b863-3e24332ba0f9/openstack-network-exporter/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.456806 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c6kdgz_1d55647b-8db7-4352-aba2-c1bc67c744e0/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.496770 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56b7cb9d84-l6fj5_58b03ea7-bd7c-488c-bc46-ba93c7029243/placement-api/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.556298 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-56b7cb9d84-l6fj5_58b03ea7-bd7c-488c-bc46-ba93c7029243/placement-log/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.646850 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_440c2cc5-deab-44af-b561-072f18b90f23/init-config-reloader/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.839505 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_440c2cc5-deab-44af-b561-072f18b90f23/config-reloader/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.850911 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_440c2cc5-deab-44af-b561-072f18b90f23/prometheus/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.854239 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_440c2cc5-deab-44af-b561-072f18b90f23/thanos-sidecar/0.log" Nov 23 09:34:58 crc kubenswrapper[4988]: I1123 09:34:58.882236 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_440c2cc5-deab-44af-b561-072f18b90f23/init-config-reloader/0.log" Nov 23 09:34:59 crc kubenswrapper[4988]: I1123 09:34:59.053612 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f746d292-0944-4bc3-8abc-71f42dbe6957/setup-container/0.log" Nov 23 09:34:59 crc kubenswrapper[4988]: I1123 09:34:59.254735 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f746d292-0944-4bc3-8abc-71f42dbe6957/setup-container/0.log" Nov 23 09:35:00 crc kubenswrapper[4988]: I1123 09:35:00.023467 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cc03f11b-e69c-48c7-80e9-a044721fbf1e/setup-container/0.log" Nov 23 09:35:00 crc kubenswrapper[4988]: I1123 09:35:00.038155 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f746d292-0944-4bc3-8abc-71f42dbe6957/rabbitmq/0.log" Nov 23 09:35:00 crc 
kubenswrapper[4988]: I1123 09:35:00.242324 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cc03f11b-e69c-48c7-80e9-a044721fbf1e/rabbitmq/0.log" Nov 23 09:35:00 crc kubenswrapper[4988]: I1123 09:35:00.270257 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_cc03f11b-e69c-48c7-80e9-a044721fbf1e/setup-container/0.log" Nov 23 09:35:00 crc kubenswrapper[4988]: I1123 09:35:00.276157 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-6b9qw_89e491db-3451-4397-a5f0-fcaf880606ec/reboot-os-openstack-openstack-cell1/0.log" Nov 23 09:35:00 crc kubenswrapper[4988]: I1123 09:35:00.492940 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-vjbjb_1d4bbee0-8089-497c-9b50-d3dedd273cf3/run-os-openstack-openstack-cell1/0.log" Nov 23 09:35:00 crc kubenswrapper[4988]: I1123 09:35:00.596460 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-rfdt5_46526462-8f88-47f0-a5ab-f4becf600f50/ssh-known-hosts-openstack/0.log" Nov 23 09:35:00 crc kubenswrapper[4988]: I1123 09:35:00.819358 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b7c8d774d-kkpgx_7eaab950-47dc-48a7-8e3c-854cec2fab5f/proxy-server/0.log" Nov 23 09:35:00 crc kubenswrapper[4988]: I1123 09:35:00.928807 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-n5cwm_fff96809-3ec8-4406-a894-c22b4d13b94d/swift-ring-rebalance/0.log" Nov 23 09:35:01 crc kubenswrapper[4988]: I1123 09:35:01.001451 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6b7c8d774d-kkpgx_7eaab950-47dc-48a7-8e3c-854cec2fab5f/proxy-httpd/0.log" Nov 23 09:35:01 crc kubenswrapper[4988]: I1123 09:35:01.159535 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-67jsk_34592fcb-7601-4940-a40a-3fc5de6c9d01/telemetry-openstack-openstack-cell1/0.log" Nov 23 09:35:01 crc kubenswrapper[4988]: I1123 09:35:01.276288 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_11f8f692-04c1-427a-b77f-686e3f8409ed/tempest-tests-tempest-tests-runner/0.log" Nov 23 09:35:01 crc kubenswrapper[4988]: I1123 09:35:01.783390 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_416d4162-3900-4408-8882-0944facebc82/test-operator-logs-container/0.log" Nov 23 09:35:01 crc kubenswrapper[4988]: I1123 09:35:01.922704 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-2xdnp_7d3fd979-5f3d-43cf-a771-80668ab96673/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Nov 23 09:35:02 crc kubenswrapper[4988]: I1123 09:35:02.003565 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-wxq47_de31961d-df81-4271-9536-e427f3b15766/validate-network-openstack-openstack-cell1/0.log" Nov 23 09:35:06 crc kubenswrapper[4988]: I1123 09:35:06.495921 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:35:06 crc kubenswrapper[4988]: E1123 09:35:06.496796 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:35:17 crc kubenswrapper[4988]: I1123 09:35:17.106877 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1cd2fb7e-7cf8-40c2-8274-f81ed5838b04/memcached/0.log" Nov 23 09:35:17 crc kubenswrapper[4988]: I1123 09:35:17.496701 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:35:17 crc kubenswrapper[4988]: E1123 09:35:17.496952 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:35:30 crc kubenswrapper[4988]: I1123 09:35:30.685709 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k_5c1af7b2-e795-4304-bbde-e70be9ae1520/util/0.log" Nov 23 09:35:30 crc kubenswrapper[4988]: I1123 09:35:30.902132 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k_5c1af7b2-e795-4304-bbde-e70be9ae1520/util/0.log" Nov 23 09:35:30 crc kubenswrapper[4988]: I1123 09:35:30.903796 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k_5c1af7b2-e795-4304-bbde-e70be9ae1520/pull/0.log" Nov 23 09:35:30 crc kubenswrapper[4988]: I1123 09:35:30.938565 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k_5c1af7b2-e795-4304-bbde-e70be9ae1520/pull/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.108380 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k_5c1af7b2-e795-4304-bbde-e70be9ae1520/util/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.145511 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k_5c1af7b2-e795-4304-bbde-e70be9ae1520/extract/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.163492 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287vdb8k_5c1af7b2-e795-4304-bbde-e70be9ae1520/pull/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.305249 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-kt7s2_9fe52e6b-130d-42a1-b6f5-334df6a86ceb/kube-rbac-proxy/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.382369 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-jvdch_08f2612c-cf4d-42a5-81df-238887b3e77d/kube-rbac-proxy/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.469740 
4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-kt7s2_9fe52e6b-130d-42a1-b6f5-334df6a86ceb/manager/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.603230 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-jvdch_08f2612c-cf4d-42a5-81df-238887b3e77d/manager/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.620185 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-fhfm5_5dfa0ca5-a839-48ed-be21-2f065840d1f9/kube-rbac-proxy/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.659431 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-fhfm5_5dfa0ca5-a839-48ed-be21-2f065840d1f9/manager/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.819154 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-kw5fz_74d47023-c608-443f-a863-521ec94aef70/kube-rbac-proxy/0.log" Nov 23 09:35:31 crc kubenswrapper[4988]: I1123 09:35:31.993113 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-kw5fz_74d47023-c608-443f-a863-521ec94aef70/manager/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.067701 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-bdkmj_cbf8310b-969a-4233-9224-5fead64dda9e/kube-rbac-proxy/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.113866 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-bdkmj_cbf8310b-969a-4233-9224-5fead64dda9e/manager/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.200050 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-5qfzn_a583f2f1-c89f-499a-a884-959c259bb45f/kube-rbac-proxy/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.288679 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-5qfzn_a583f2f1-c89f-499a-a884-959c259bb45f/manager/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.353132 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-hv2bc_f572ed19-6b6b-43e6-a2b9-b59bfe403460/kube-rbac-proxy/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.496392 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:35:32 crc kubenswrapper[4988]: E1123 09:35:32.496684 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.546396 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-4fk5h_c8611d57-ca99-4b6d-ade8-f7f3bce489f4/kube-rbac-proxy/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.591850 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-4fk5h_c8611d57-ca99-4b6d-ade8-f7f3bce489f4/manager/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.657873 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-hv2bc_f572ed19-6b6b-43e6-a2b9-b59bfe403460/manager/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.721830 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-l44mr_e9c6dc0a-9868-4554-8563-d6b83ed3d26b/kube-rbac-proxy/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.928543 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-l44mr_e9c6dc0a-9868-4554-8563-d6b83ed3d26b/manager/0.log" Nov 23 09:35:32 crc kubenswrapper[4988]: I1123 09:35:32.957121 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-4rjh7_9cac3108-fd73-4cfb-a801-b255fcaf9860/kube-rbac-proxy/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.020864 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-4rjh7_9cac3108-fd73-4cfb-a801-b255fcaf9860/manager/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.108963 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-pnlkp_3c7c9d25-87ae-421d-9f54-f74ff0b68e49/kube-rbac-proxy/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.228252 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-pnlkp_3c7c9d25-87ae-421d-9f54-f74ff0b68e49/manager/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.324300 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-5jtrf_bf7f663a-839b-4e58-b112-4da1e76f2def/kube-rbac-proxy/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.425062 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-5jtrf_bf7f663a-839b-4e58-b112-4da1e76f2def/manager/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.506957 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-cbktd_e3313b12-ee41-4c3d-82dd-be1f78194c70/kube-rbac-proxy/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.680279 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-cbktd_e3313b12-ee41-4c3d-82dd-be1f78194c70/manager/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.765089 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-5hc84_84928c23-bc05-448f-bd61-ce4f32c0edea/kube-rbac-proxy/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.776653 4988 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-5hc84_84928c23-bc05-448f-bd61-ce4f32c0edea/manager/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.922650 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd447lpqt_f7d8486e-f61f-46a0-9e04-1fefad43dede/kube-rbac-proxy/0.log" Nov 23 09:35:33 crc kubenswrapper[4988]: I1123 09:35:33.964956 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd447lpqt_f7d8486e-f61f-46a0-9e04-1fefad43dede/manager/0.log" Nov 23 09:35:34 crc kubenswrapper[4988]: I1123 09:35:34.161127 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-rdk8z_ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8/kube-rbac-proxy/0.log" Nov 23 09:35:34 crc kubenswrapper[4988]: I1123 09:35:34.227949 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-vvsss_82239a25-26cf-4cfd-b41a-c427b2289d76/kube-rbac-proxy/0.log" Nov 23 09:35:34 crc kubenswrapper[4988]: I1123 09:35:34.691497 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-vvsss_82239a25-26cf-4cfd-b41a-c427b2289d76/operator/0.log" Nov 23 09:35:34 crc kubenswrapper[4988]: I1123 09:35:34.764459 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-hk9rx_a16f0be7-aa48-4002-b8e4-382ae125a870/kube-rbac-proxy/0.log" Nov 23 09:35:34 crc kubenswrapper[4988]: I1123 09:35:34.822758 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xfz5r_723ca5e4-78e1-4e98-af52-13b7f8325692/registry-server/0.log" Nov 23 09:35:35 crc kubenswrapper[4988]: I1123 09:35:35.025776 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-hk9rx_a16f0be7-aa48-4002-b8e4-382ae125a870/manager/0.log" Nov 23 09:35:35 crc kubenswrapper[4988]: I1123 09:35:35.049263 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-rnmg7_55fb8350-1b05-4077-829b-37df675ff824/kube-rbac-proxy/0.log" Nov 23 09:35:35 crc kubenswrapper[4988]: I1123 09:35:35.166813 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-rnmg7_55fb8350-1b05-4077-829b-37df675ff824/manager/0.log" Nov 23 09:35:35 crc kubenswrapper[4988]: I1123 09:35:35.296404 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-6gs8g_6fb87584-e4bf-4ed9-9aa4-7e61fb8b0128/operator/0.log" Nov 23 09:35:35 crc kubenswrapper[4988]: I1123 09:35:35.425223 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-qzzrd_18e43e77-85d1-4d8f-a8f4-06c8e121b817/kube-rbac-proxy/0.log" Nov 23 09:35:35 crc kubenswrapper[4988]: I1123 09:35:35.466033 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-qzzrd_18e43e77-85d1-4d8f-a8f4-06c8e121b817/manager/0.log" Nov 23 09:35:35 crc kubenswrapper[4988]: I1123 
09:35:35.485672 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-xrsgl_a29c09da-ecac-46e1-9680-523e311135ed/kube-rbac-proxy/0.log" Nov 23 09:35:35 crc kubenswrapper[4988]: I1123 09:35:35.710846 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-8xtzh_269b1e70-13f3-412b-a957-a47eb5713b1e/kube-rbac-proxy/0.log" Nov 23 09:35:36 crc kubenswrapper[4988]: I1123 09:35:36.013100 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-sghjg_345e22ca-f6b2-417f-80ab-c59f9957fd20/kube-rbac-proxy/0.log" Nov 23 09:35:36 crc kubenswrapper[4988]: I1123 09:35:36.825554 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-sghjg_345e22ca-f6b2-417f-80ab-c59f9957fd20/manager/0.log" Nov 23 09:35:36 crc kubenswrapper[4988]: I1123 09:35:36.901993 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-8xtzh_269b1e70-13f3-412b-a957-a47eb5713b1e/manager/0.log" Nov 23 09:35:37 crc kubenswrapper[4988]: I1123 09:35:37.031663 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-xrsgl_a29c09da-ecac-46e1-9680-523e311135ed/manager/0.log" Nov 23 09:35:37 crc kubenswrapper[4988]: I1123 09:35:37.622850 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-rdk8z_ea3cebe7-9cdf-4d1c-aee7-0f662336f5d8/manager/0.log" Nov 23 09:35:45 crc kubenswrapper[4988]: I1123 09:35:45.497034 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:35:45 crc kubenswrapper[4988]: E1123 09:35:45.497952 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:35:54 crc kubenswrapper[4988]: I1123 09:35:54.298686 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f6zkr_a906f419-a9c8-480a-9824-4a9971c6d1ec/control-plane-machine-set-operator/0.log" Nov 23 09:35:54 crc kubenswrapper[4988]: I1123 09:35:54.320790 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-j9nr5_5aec85a9-cf10-4f54-9269-aab56ed0378a/kube-rbac-proxy/0.log" Nov 23 09:35:54 crc kubenswrapper[4988]: I1123 09:35:54.473132 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-j9nr5_5aec85a9-cf10-4f54-9269-aab56ed0378a/machine-api-operator/0.log" Nov 23 09:35:58 crc kubenswrapper[4988]: I1123 09:35:58.504552 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:35:58 crc kubenswrapper[4988]: E1123 09:35:58.505415 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:36:08 crc kubenswrapper[4988]: I1123 09:36:08.514316 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-49vd4_7423cc4d-808d-439a-b38c-106914e44da4/cert-manager-controller/0.log" Nov 23 09:36:08 crc kubenswrapper[4988]: I1123 09:36:08.721125 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-cvznd_5cf554b8-f67a-43b4-83a0-c8c1819552a5/cert-manager-webhook/0.log" Nov 23 09:36:08 crc kubenswrapper[4988]: I1123 09:36:08.798311 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-rxnkn_3cc0e918-fd10-4cc3-b406-3c7dd4e31945/cert-manager-cainjector/0.log" Nov 23 09:36:12 crc kubenswrapper[4988]: I1123 09:36:12.497392 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:36:12 crc kubenswrapper[4988]: E1123 09:36:12.500071 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:36:23 crc kubenswrapper[4988]: I1123 09:36:23.838505 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-wlvj5_deca23f5-716a-4d4e-88ae-dd9315c77268/nmstate-console-plugin/0.log" Nov 23 09:36:24 crc kubenswrapper[4988]: I1123 09:36:24.029094 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-vwth7_fe77a552-6fcd-45d9-9d61-e242fdb0c4a2/kube-rbac-proxy/0.log" Nov 23 09:36:24 crc kubenswrapper[4988]: I1123 09:36:24.067318 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-vwth7_fe77a552-6fcd-45d9-9d61-e242fdb0c4a2/nmstate-metrics/0.log" Nov 23 09:36:24 crc kubenswrapper[4988]: I1123 09:36:24.068907 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2vt9j_026e45ed-f994-494d-ae7a-32e215f95cd2/nmstate-handler/0.log" Nov 23 09:36:24 crc kubenswrapper[4988]: I1123 09:36:24.226077 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-nwg45_c385a6b1-1890-43aa-9929-3a4a4fcb399c/nmstate-operator/0.log" Nov 23 09:36:24 crc kubenswrapper[4988]: I1123 09:36:24.257484 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-ldrdz_69fa7667-f139-4534-b39c-ac6c41f078c9/nmstate-webhook/0.log" Nov 23 09:36:27 crc kubenswrapper[4988]: I1123 09:36:27.496056 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:36:27 crc kubenswrapper[4988]: E1123 09:36:27.496711 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:36:38 crc kubenswrapper[4988]: I1123 09:36:38.503896 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:36:38 crc kubenswrapper[4988]: E1123 09:36:38.504674 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.371561 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-9rgvp_1c39f006-2177-4afd-a5d5-a869f8aabad6/kube-rbac-proxy/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.475233 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-9rgvp_1c39f006-2177-4afd-a5d5-a869f8aabad6/controller/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.533083 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-frr-files/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.673904 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-reloader/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.674549 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-metrics/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.675809 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-frr-files/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.716025 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-reloader/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.918160 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-metrics/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.928334 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-metrics/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.934523 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-reloader/0.log" Nov 23 09:36:40 crc kubenswrapper[4988]: I1123 09:36:40.936595 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-frr-files/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.098053 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-frr-files/0.log" Nov 23 09:36:41 crc 
kubenswrapper[4988]: I1123 09:36:41.149042 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-reloader/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.153582 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/cp-metrics/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.188916 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/controller/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.360032 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/frr-metrics/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.360115 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/kube-rbac-proxy/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.418605 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/kube-rbac-proxy-frr/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.581367 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/reloader/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.625453 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-6thck_101dd547-04aa-4e7d-9464-14100da79eed/frr-k8s-webhook-server/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.828935 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-79595c987c-ws8kg_ffe4a76d-1690-40e0-acc7-56a52602cc77/manager/0.log" Nov 23 09:36:41 crc kubenswrapper[4988]: I1123 09:36:41.929291 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-cffff7689-6qc7z_c08dbb11-869b-4dec-bb8a-67b5c693cd70/webhook-server/0.log" Nov 23 09:36:42 crc kubenswrapper[4988]: I1123 09:36:42.040962 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6flql_13cf1d15-67f3-4424-a807-f508e85f2a26/kube-rbac-proxy/0.log" Nov 23 09:36:42 crc kubenswrapper[4988]: I1123 09:36:42.888166 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6flql_13cf1d15-67f3-4424-a807-f508e85f2a26/speaker/0.log" Nov 23 09:36:44 crc kubenswrapper[4988]: I1123 09:36:44.576821 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l95c9_7664e29b-309a-4f06-bfd5-6fc10d70479e/frr/0.log" Nov 23 09:36:53 crc kubenswrapper[4988]: I1123 09:36:53.496609 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:36:53 crc kubenswrapper[4988]: E1123 09:36:53.497442 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:36:56 crc 
kubenswrapper[4988]: I1123 09:36:56.914504 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh_9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f/util/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.104680 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh_9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f/pull/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.113127 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh_9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f/util/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.153926 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh_9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f/pull/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.295291 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh_9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f/util/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.321941 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh_9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f/pull/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.347393 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajj6bh_9e3d6293-1a2e-4ca5-84a9-26c341a9ea6f/extract/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.475082 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg_fdd62a47-453f-42a2-a73a-5dba3633b5be/util/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.582213 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg_fdd62a47-453f-42a2-a73a-5dba3633b5be/util/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.625456 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg_fdd62a47-453f-42a2-a73a-5dba3633b5be/pull/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.653853 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg_fdd62a47-453f-42a2-a73a-5dba3633b5be/pull/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.754008 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg_fdd62a47-453f-42a2-a73a-5dba3633b5be/pull/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.781914 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg_fdd62a47-453f-42a2-a73a-5dba3633b5be/util/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.797679 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772e4tdzg_fdd62a47-453f-42a2-a73a-5dba3633b5be/extract/0.log" Nov 23 09:36:57 crc kubenswrapper[4988]: I1123 09:36:57.947170 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr_504adff5-b08d-4279-8853-26c0b3847e79/util/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.109053 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr_504adff5-b08d-4279-8853-26c0b3847e79/util/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.130295 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr_504adff5-b08d-4279-8853-26c0b3847e79/pull/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.147146 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr_504adff5-b08d-4279-8853-26c0b3847e79/pull/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.280910 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr_504adff5-b08d-4279-8853-26c0b3847e79/extract/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.303310 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr_504adff5-b08d-4279-8853-26c0b3847e79/pull/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.321337 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210mpklr_504adff5-b08d-4279-8853-26c0b3847e79/util/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.457809 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75tvw_48ee2284-9e6c-4049-bf43-9473c176ca62/extract-utilities/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.643978 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75tvw_48ee2284-9e6c-4049-bf43-9473c176ca62/extract-utilities/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.658648 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75tvw_48ee2284-9e6c-4049-bf43-9473c176ca62/extract-content/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.663239 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75tvw_48ee2284-9e6c-4049-bf43-9473c176ca62/extract-content/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.852145 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75tvw_48ee2284-9e6c-4049-bf43-9473c176ca62/extract-utilities/0.log" Nov 23 09:36:58 crc kubenswrapper[4988]: I1123 09:36:58.882354 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75tvw_48ee2284-9e6c-4049-bf43-9473c176ca62/extract-content/0.log" Nov 23 09:36:59 crc kubenswrapper[4988]: I1123 09:36:59.071926 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-7xr2d_a82cd923-5d18-4b12-9a2b-bd52e81b68c4/extract-utilities/0.log" Nov 23 09:36:59 crc kubenswrapper[4988]: I1123 09:36:59.270443 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7xr2d_a82cd923-5d18-4b12-9a2b-bd52e81b68c4/extract-utilities/0.log" Nov 23 09:36:59 crc kubenswrapper[4988]: I1123 09:36:59.341981 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7xr2d_a82cd923-5d18-4b12-9a2b-bd52e81b68c4/extract-content/0.log" Nov 23 09:36:59 crc kubenswrapper[4988]: I1123 09:36:59.363428 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7xr2d_a82cd923-5d18-4b12-9a2b-bd52e81b68c4/extract-content/0.log" Nov 23 09:36:59 crc kubenswrapper[4988]: I1123 09:36:59.525246 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7xr2d_a82cd923-5d18-4b12-9a2b-bd52e81b68c4/extract-utilities/0.log" Nov 23 09:36:59 crc kubenswrapper[4988]: I1123 09:36:59.602110 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7xr2d_a82cd923-5d18-4b12-9a2b-bd52e81b68c4/extract-content/0.log" Nov 23 09:36:59 crc kubenswrapper[4988]: I1123 09:36:59.754167 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg_61d7d35a-02bc-4b40-848b-e773652f2691/util/0.log" Nov 23 09:36:59 crc kubenswrapper[4988]: I1123 09:36:59.953383 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg_61d7d35a-02bc-4b40-848b-e773652f2691/pull/0.log" Nov 23 09:36:59 crc kubenswrapper[4988]: I1123 09:36:59.984405 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg_61d7d35a-02bc-4b40-848b-e773652f2691/util/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.002470 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg_61d7d35a-02bc-4b40-848b-e773652f2691/pull/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.212858 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg_61d7d35a-02bc-4b40-848b-e773652f2691/util/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.225206 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg_61d7d35a-02bc-4b40-848b-e773652f2691/extract/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.248547 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-75tvw_48ee2284-9e6c-4049-bf43-9473c176ca62/registry-server/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.251030 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6w84bg_61d7d35a-02bc-4b40-848b-e773652f2691/pull/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.421862 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-g4fsw_dcf0e02c-4654-4fe4-aedb-3817fd1d4221/marketplace-operator/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.466962 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gpdnr_6a120914-49bb-4c6c-9b35-7e89e7749110/extract-utilities/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.714618 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gpdnr_6a120914-49bb-4c6c-9b35-7e89e7749110/extract-utilities/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.723466 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gpdnr_6a120914-49bb-4c6c-9b35-7e89e7749110/extract-content/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.730806 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7xr2d_a82cd923-5d18-4b12-9a2b-bd52e81b68c4/registry-server/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.768494 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gpdnr_6a120914-49bb-4c6c-9b35-7e89e7749110/extract-content/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.875888 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gpdnr_6a120914-49bb-4c6c-9b35-7e89e7749110/extract-utilities/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.921617 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gpdnr_6a120914-49bb-4c6c-9b35-7e89e7749110/extract-content/0.log" Nov 23 09:37:00 crc kubenswrapper[4988]: I1123 09:37:00.962942 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9rjm_8e2aaec8-1a4e-4655-ba7f-ce2d2065920d/extract-utilities/0.log" Nov 23 09:37:01 crc kubenswrapper[4988]: I1123 09:37:01.197408 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9rjm_8e2aaec8-1a4e-4655-ba7f-ce2d2065920d/extract-utilities/0.log" Nov 23 09:37:01 crc kubenswrapper[4988]: I1123 09:37:01.220981 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gpdnr_6a120914-49bb-4c6c-9b35-7e89e7749110/registry-server/0.log" Nov 23 09:37:01 crc kubenswrapper[4988]: I1123 09:37:01.245767 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9rjm_8e2aaec8-1a4e-4655-ba7f-ce2d2065920d/extract-content/0.log" Nov 23 09:37:01 crc kubenswrapper[4988]: I1123 09:37:01.246808 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9rjm_8e2aaec8-1a4e-4655-ba7f-ce2d2065920d/extract-content/0.log" Nov 23 09:37:01 crc kubenswrapper[4988]: I1123 09:37:01.370255 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9rjm_8e2aaec8-1a4e-4655-ba7f-ce2d2065920d/extract-utilities/0.log" Nov 23 09:37:01 crc kubenswrapper[4988]: I1123 09:37:01.390679 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9rjm_8e2aaec8-1a4e-4655-ba7f-ce2d2065920d/extract-content/0.log" Nov 23 09:37:03 crc kubenswrapper[4988]: I1123 09:37:03.006187 4988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-k9rjm_8e2aaec8-1a4e-4655-ba7f-ce2d2065920d/registry-server/0.log" Nov 23 09:37:04 crc kubenswrapper[4988]: I1123 09:37:04.497353 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:37:04 crc kubenswrapper[4988]: E1123 09:37:04.498181 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:37:15 crc kubenswrapper[4988]: I1123 09:37:15.106559 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-fqkjp_e2904ef0-72de-43d7-8d49-b1f8e828cd51/prometheus-operator/0.log" Nov 23 09:37:16 crc kubenswrapper[4988]: I1123 09:37:16.093530 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6984fddfb7-srfv7_56cbbfcd-be09-45ab-97f1-eb8003e190d7/prometheus-operator-admission-webhook/0.log" Nov 23 09:37:16 crc kubenswrapper[4988]: I1123 09:37:16.158735 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6984fddfb7-s75dd_bdef66a9-8c6f-4b29-a47b-f46a4694696b/prometheus-operator-admission-webhook/0.log" Nov 23 09:37:16 crc kubenswrapper[4988]: I1123 09:37:16.324906 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-dlwlq_5e20fbfc-63e7-4be1-92aa-f4f8d724b112/operator/0.log" Nov 23 09:37:16 crc kubenswrapper[4988]: I1123 09:37:16.357103 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-qgshk_cebba8f4-982d-470a-bef5-05e27811a64b/perses-operator/0.log" Nov 23 09:37:17 crc kubenswrapper[4988]: I1123 09:37:17.496057 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:37:17 crc kubenswrapper[4988]: E1123 09:37:17.496744 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:37:28 crc kubenswrapper[4988]: I1123 09:37:28.509175 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:37:28 crc kubenswrapper[4988]: E1123 09:37:28.511790 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:37:39 crc kubenswrapper[4988]: I1123 09:37:39.496049 4988 scope.go:117] "RemoveContainer" 
containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:37:39 crc kubenswrapper[4988]: E1123 09:37:39.497012 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.793590 4988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4g4qb"] Nov 23 09:37:48 crc kubenswrapper[4988]: E1123 09:37:48.795716 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerName="extract-content" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.795730 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerName="extract-content" Nov 23 09:37:48 crc kubenswrapper[4988]: E1123 09:37:48.795745 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerName="extract-content" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.795751 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerName="extract-content" Nov 23 09:37:48 crc kubenswrapper[4988]: E1123 09:37:48.795763 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerName="registry-server" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.795769 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerName="registry-server" Nov 23 09:37:48 crc kubenswrapper[4988]: E1123 09:37:48.795780 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerName="extract-utilities" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.795787 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerName="extract-utilities" Nov 23 09:37:48 crc kubenswrapper[4988]: E1123 09:37:48.795802 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerName="registry-server" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.795808 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerName="registry-server" Nov 23 09:37:48 crc kubenswrapper[4988]: E1123 09:37:48.795825 4988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerName="extract-utilities" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.795830 4988 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerName="extract-utilities" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.796015 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebe2475-31e1-4008-bd53-c71d60b8e3cf" containerName="registry-server" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.796029 4988 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb3ea761-d896-49b6-b243-156adfdb47d4" containerName="registry-server" Nov 23 09:37:48 
crc kubenswrapper[4988]: I1123 09:37:48.797744 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g4qb" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.818234 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g4qb"] Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.846952 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw24q\" (UniqueName: \"kubernetes.io/projected/b6193c00-12c7-44a1-97d5-b6268c5a4e60-kube-api-access-dw24q\") pod \"redhat-marketplace-4g4qb\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") " pod="openshift-marketplace/redhat-marketplace-4g4qb" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.847059 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-catalog-content\") pod \"redhat-marketplace-4g4qb\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") " pod="openshift-marketplace/redhat-marketplace-4g4qb" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.847324 4988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-utilities\") pod \"redhat-marketplace-4g4qb\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") " pod="openshift-marketplace/redhat-marketplace-4g4qb" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.949660 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw24q\" (UniqueName: \"kubernetes.io/projected/b6193c00-12c7-44a1-97d5-b6268c5a4e60-kube-api-access-dw24q\") pod \"redhat-marketplace-4g4qb\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") " pod="openshift-marketplace/redhat-marketplace-4g4qb" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.950379 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-catalog-content\") pod \"redhat-marketplace-4g4qb\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") " pod="openshift-marketplace/redhat-marketplace-4g4qb" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.950918 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-catalog-content\") pod \"redhat-marketplace-4g4qb\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") " pod="openshift-marketplace/redhat-marketplace-4g4qb" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.951092 4988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-utilities\") pod \"redhat-marketplace-4g4qb\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") " pod="openshift-marketplace/redhat-marketplace-4g4qb" Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.951401 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-utilities\") pod \"redhat-marketplace-4g4qb\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") " pod="openshift-marketplace/redhat-marketplace-4g4qb" Nov 23 09:37:48 
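Annotation: the reconciler_common entries above trace the volume manager's sequence for each of the new pod's volumes: VerifyControllerAttachedVolume, then MountVolume, then MountVolume.SetUp succeeded. The two data volumes are emptyDirs; kube-api-access-dw24q is the projected service-account token. A sketch of how volumes with these names would be declared using the Kubernetes Go types (illustrative only; the actual catalog pod spec is not in this log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Illustrative declarations matching the volume names the volume manager
// logs above: two emptyDir scratch volumes. The kube-api-access-dw24q
// projected token volume is injected by the control plane rather than
// declared by the pod author, so it is omitted here.
func catalogPodVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			Name:         "utilities",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		},
		{
			Name:         "catalog-content",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		},
	}
}

func main() {
	for _, v := range catalogPodVolumes() {
		fmt.Println(v.Name) // emptyDirs live and die with the pod, as the later teardown shows
	}
}
```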
Nov 23 09:37:48 crc kubenswrapper[4988]: I1123 09:37:48.969454 4988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw24q\" (UniqueName: \"kubernetes.io/projected/b6193c00-12c7-44a1-97d5-b6268c5a4e60-kube-api-access-dw24q\") pod \"redhat-marketplace-4g4qb\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") " pod="openshift-marketplace/redhat-marketplace-4g4qb"
Nov 23 09:37:49 crc kubenswrapper[4988]: I1123 09:37:49.123047 4988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g4qb"
Nov 23 09:37:49 crc kubenswrapper[4988]: I1123 09:37:49.698169 4988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g4qb"]
Nov 23 09:37:49 crc kubenswrapper[4988]: I1123 09:37:49.899033 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g4qb" event={"ID":"b6193c00-12c7-44a1-97d5-b6268c5a4e60","Type":"ContainerStarted","Data":"71a4e12c0481c4b7decb14e2d65b917e478e2b931deda7f4d48b238286e0c038"}
Nov 23 09:37:50 crc kubenswrapper[4988]: I1123 09:37:50.909210 4988 generic.go:334] "Generic (PLEG): container finished" podID="b6193c00-12c7-44a1-97d5-b6268c5a4e60" containerID="236f099c363140260081b73ee662be45f11dcc87bfdab961c8c5d57fa4cce9f6" exitCode=0
Nov 23 09:37:50 crc kubenswrapper[4988]: I1123 09:37:50.909331 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g4qb" event={"ID":"b6193c00-12c7-44a1-97d5-b6268c5a4e60","Type":"ContainerDied","Data":"236f099c363140260081b73ee662be45f11dcc87bfdab961c8c5d57fa4cce9f6"}
Nov 23 09:37:50 crc kubenswrapper[4988]: I1123 09:37:50.911667 4988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 09:37:52 crc kubenswrapper[4988]: I1123 09:37:52.942062 4988 generic.go:334] "Generic (PLEG): container finished" podID="b6193c00-12c7-44a1-97d5-b6268c5a4e60" containerID="e66c4d4e102f8117dfec7c6cf2ead58b7f297f7605cd01a3e3a59435bb2e0b80" exitCode=0
Nov 23 09:37:52 crc kubenswrapper[4988]: I1123 09:37:52.942185 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g4qb" event={"ID":"b6193c00-12c7-44a1-97d5-b6268c5a4e60","Type":"ContainerDied","Data":"e66c4d4e102f8117dfec7c6cf2ead58b7f297f7605cd01a3e3a59435bb2e0b80"}
Nov 23 09:37:53 crc kubenswrapper[4988]: I1123 09:37:53.957947 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g4qb" event={"ID":"b6193c00-12c7-44a1-97d5-b6268c5a4e60","Type":"ContainerStarted","Data":"ab4add3f314ada955ebacde1f5928a5e375ebf9f30db9773eff9dc0cebe9ae51"}
Nov 23 09:37:53 crc kubenswrapper[4988]: I1123 09:37:53.983035 4988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4g4qb" podStartSLOduration=3.57821457 podStartE2EDuration="5.983018645s" podCreationTimestamp="2025-11-23 09:37:48 +0000 UTC" firstStartedPulling="2025-11-23 09:37:50.91142386 +0000 UTC m=+10323.219936623" lastFinishedPulling="2025-11-23 09:37:53.316227935 +0000 UTC m=+10325.624740698" observedRunningTime="2025-11-23 09:37:53.975699144 +0000 UTC m=+10326.284211927" watchObservedRunningTime="2025-11-23 09:37:53.983018645 +0000 UTC m=+10326.291531408"
Nov 23 09:37:54 crc kubenswrapper[4988]: I1123 09:37:54.496471 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c"
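Annotation: the pod_startup_latency_tracker entry decodes as follows: podStartE2EDuration (5.983018645s) is pod creation (09:37:48) to observed running (09:37:53.983018645), while podStartSLOduration (3.57821457s) is the same interval minus the image-pull time (lastFinishedPulling - firstStartedPulling = 2.404804075s). A quick Go check of that arithmetic using the timestamps copied from the entry:

```go
package main

import (
	"fmt"
	"time"
)

// Reproduce the kubelet's startup-duration arithmetic from the
// pod_startup_latency_tracker entry above; timestamps are copied
// verbatim from the log line.
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-23 09:37:48 +0000 UTC")
	firstPull := parse("2025-11-23 09:37:50.91142386 +0000 UTC")
	lastPull := parse("2025-11-23 09:37:53.316227935 +0000 UTC")
	running := parse("2025-11-23 09:37:53.983018645 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 5.983018645s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 3.57821457s
	fmt.Println(e2e, slo)
}
```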
Nov 23 09:37:54 crc kubenswrapper[4988]: E1123 09:37:54.496784 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:37:59 crc kubenswrapper[4988]: I1123 09:37:59.124312 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4g4qb"
Nov 23 09:37:59 crc kubenswrapper[4988]: I1123 09:37:59.125023 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4g4qb"
Nov 23 09:37:59 crc kubenswrapper[4988]: I1123 09:37:59.189698 4988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4g4qb"
Nov 23 09:38:00 crc kubenswrapper[4988]: I1123 09:38:00.128024 4988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4g4qb"
Nov 23 09:38:00 crc kubenswrapper[4988]: I1123 09:38:00.181720 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g4qb"]
Nov 23 09:38:02 crc kubenswrapper[4988]: I1123 09:38:02.064367 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4g4qb" podUID="b6193c00-12c7-44a1-97d5-b6268c5a4e60" containerName="registry-server" containerID="cri-o://ab4add3f314ada955ebacde1f5928a5e375ebf9f30db9773eff9dc0cebe9ae51" gracePeriod=2
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.078734 4988 generic.go:334] "Generic (PLEG): container finished" podID="b6193c00-12c7-44a1-97d5-b6268c5a4e60" containerID="ab4add3f314ada955ebacde1f5928a5e375ebf9f30db9773eff9dc0cebe9ae51" exitCode=0
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.078762 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g4qb" event={"ID":"b6193c00-12c7-44a1-97d5-b6268c5a4e60","Type":"ContainerDied","Data":"ab4add3f314ada955ebacde1f5928a5e375ebf9f30db9773eff9dc0cebe9ae51"}
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.546428 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g4qb"
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.713757 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw24q\" (UniqueName: \"kubernetes.io/projected/b6193c00-12c7-44a1-97d5-b6268c5a4e60-kube-api-access-dw24q\") pod \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") "
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.713833 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-catalog-content\") pod \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") "
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.713913 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-utilities\") pod \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\" (UID: \"b6193c00-12c7-44a1-97d5-b6268c5a4e60\") "
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.725415 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-utilities" (OuterVolumeSpecName: "utilities") pod "b6193c00-12c7-44a1-97d5-b6268c5a4e60" (UID: "b6193c00-12c7-44a1-97d5-b6268c5a4e60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.726052 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6193c00-12c7-44a1-97d5-b6268c5a4e60-kube-api-access-dw24q" (OuterVolumeSpecName: "kube-api-access-dw24q") pod "b6193c00-12c7-44a1-97d5-b6268c5a4e60" (UID: "b6193c00-12c7-44a1-97d5-b6268c5a4e60"). InnerVolumeSpecName "kube-api-access-dw24q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.739606 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6193c00-12c7-44a1-97d5-b6268c5a4e60" (UID: "b6193c00-12c7-44a1-97d5-b6268c5a4e60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.816651 4988 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.816892 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw24q\" (UniqueName: \"kubernetes.io/projected/b6193c00-12c7-44a1-97d5-b6268c5a4e60-kube-api-access-dw24q\") on node \"crc\" DevicePath \"\""
Nov 23 09:38:03 crc kubenswrapper[4988]: I1123 09:38:03.817013 4988 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6193c00-12c7-44a1-97d5-b6268c5a4e60-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 09:38:04 crc kubenswrapper[4988]: I1123 09:38:04.096354 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g4qb" event={"ID":"b6193c00-12c7-44a1-97d5-b6268c5a4e60","Type":"ContainerDied","Data":"71a4e12c0481c4b7decb14e2d65b917e478e2b931deda7f4d48b238286e0c038"}
Nov 23 09:38:04 crc kubenswrapper[4988]: I1123 09:38:04.096408 4988 scope.go:117] "RemoveContainer" containerID="ab4add3f314ada955ebacde1f5928a5e375ebf9f30db9773eff9dc0cebe9ae51"
Nov 23 09:38:04 crc kubenswrapper[4988]: I1123 09:38:04.096545 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g4qb"
Nov 23 09:38:04 crc kubenswrapper[4988]: I1123 09:38:04.161554 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g4qb"]
Nov 23 09:38:04 crc kubenswrapper[4988]: I1123 09:38:04.169464 4988 scope.go:117] "RemoveContainer" containerID="e66c4d4e102f8117dfec7c6cf2ead58b7f297f7605cd01a3e3a59435bb2e0b80"
Nov 23 09:38:04 crc kubenswrapper[4988]: I1123 09:38:04.199880 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g4qb"]
Nov 23 09:38:04 crc kubenswrapper[4988]: I1123 09:38:04.202033 4988 scope.go:117] "RemoveContainer" containerID="236f099c363140260081b73ee662be45f11dcc87bfdab961c8c5d57fa4cce9f6"
Nov 23 09:38:04 crc kubenswrapper[4988]: I1123 09:38:04.516801 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6193c00-12c7-44a1-97d5-b6268c5a4e60" path="/var/lib/kubelet/pods/b6193c00-12c7-44a1-97d5-b6268c5a4e60/volumes"
Nov 23 09:38:09 crc kubenswrapper[4988]: I1123 09:38:09.497545 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c"
Nov 23 09:38:09 crc kubenswrapper[4988]: E1123 09:38:09.498292 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
Nov 23 09:38:24 crc kubenswrapper[4988]: I1123 09:38:24.496694 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c"
Nov 23 09:38:24 crc kubenswrapper[4988]: E1123 09:38:24.497424 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1"
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:38:36 crc kubenswrapper[4988]: I1123 09:38:36.498138 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:38:36 crc kubenswrapper[4988]: E1123 09:38:36.498902 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:38:47 crc kubenswrapper[4988]: I1123 09:38:47.497095 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:38:47 crc kubenswrapper[4988]: E1123 09:38:47.497946 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:39:01 crc kubenswrapper[4988]: I1123 09:39:01.497253 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:39:01 crc kubenswrapper[4988]: E1123 09:39:01.501503 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:39:13 crc kubenswrapper[4988]: I1123 09:39:13.496671 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:39:13 crc kubenswrapper[4988]: E1123 09:39:13.497677 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:39:24 crc kubenswrapper[4988]: I1123 09:39:24.052564 4988 scope.go:117] "RemoveContainer" containerID="e8defc95a8b64e3a2f1ef172c794c16d5a217efb869ca228f900f911e0e582c6" Nov 23 09:39:26 crc kubenswrapper[4988]: I1123 09:39:26.496278 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:39:26 crc kubenswrapper[4988]: E1123 09:39:26.497108 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:39:26 crc kubenswrapper[4988]: I1123 09:39:26.983001 4988 generic.go:334] "Generic (PLEG): container finished" podID="0369f4b3-b212-4bbf-b150-c4264029e027" containerID="a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901" exitCode=0 Nov 23 09:39:26 crc kubenswrapper[4988]: I1123 09:39:26.983043 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rbvk2/must-gather-t22m6" event={"ID":"0369f4b3-b212-4bbf-b150-c4264029e027","Type":"ContainerDied","Data":"a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901"} Nov 23 09:39:26 crc kubenswrapper[4988]: I1123 09:39:26.984044 4988 scope.go:117] "RemoveContainer" containerID="a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901" Nov 23 09:39:27 crc kubenswrapper[4988]: I1123 09:39:27.093148 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rbvk2_must-gather-t22m6_0369f4b3-b212-4bbf-b150-c4264029e027/gather/0.log" Nov 23 09:39:36 crc kubenswrapper[4988]: I1123 09:39:36.428119 4988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rbvk2/must-gather-t22m6"] Nov 23 09:39:36 crc kubenswrapper[4988]: I1123 09:39:36.429927 4988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rbvk2/must-gather-t22m6" podUID="0369f4b3-b212-4bbf-b150-c4264029e027" containerName="copy" containerID="cri-o://47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd" gracePeriod=2 Nov 23 09:39:36 crc kubenswrapper[4988]: I1123 09:39:36.441167 4988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rbvk2/must-gather-t22m6"] Nov 23 09:39:36 crc kubenswrapper[4988]: I1123 09:39:36.954054 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rbvk2_must-gather-t22m6_0369f4b3-b212-4bbf-b150-c4264029e027/copy/0.log" Nov 23 09:39:36 crc kubenswrapper[4988]: I1123 09:39:36.955057 4988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.081677 4988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rbvk2_must-gather-t22m6_0369f4b3-b212-4bbf-b150-c4264029e027/copy/0.log" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.081978 4988 generic.go:334] "Generic (PLEG): container finished" podID="0369f4b3-b212-4bbf-b150-c4264029e027" containerID="47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd" exitCode=143 Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.082025 4988 scope.go:117] "RemoveContainer" containerID="47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.082109 4988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rbvk2/must-gather-t22m6" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.098114 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0369f4b3-b212-4bbf-b150-c4264029e027-must-gather-output\") pod \"0369f4b3-b212-4bbf-b150-c4264029e027\" (UID: \"0369f4b3-b212-4bbf-b150-c4264029e027\") " Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.098232 4988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psrb8\" (UniqueName: \"kubernetes.io/projected/0369f4b3-b212-4bbf-b150-c4264029e027-kube-api-access-psrb8\") pod \"0369f4b3-b212-4bbf-b150-c4264029e027\" (UID: \"0369f4b3-b212-4bbf-b150-c4264029e027\") " Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.116189 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0369f4b3-b212-4bbf-b150-c4264029e027-kube-api-access-psrb8" (OuterVolumeSpecName: "kube-api-access-psrb8") pod "0369f4b3-b212-4bbf-b150-c4264029e027" (UID: "0369f4b3-b212-4bbf-b150-c4264029e027"). InnerVolumeSpecName "kube-api-access-psrb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.119357 4988 scope.go:117] "RemoveContainer" containerID="a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.202497 4988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psrb8\" (UniqueName: \"kubernetes.io/projected/0369f4b3-b212-4bbf-b150-c4264029e027-kube-api-access-psrb8\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.219706 4988 scope.go:117] "RemoveContainer" containerID="47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd" Nov 23 09:39:37 crc kubenswrapper[4988]: E1123 09:39:37.220376 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd\": container with ID starting with 47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd not found: ID does not exist" containerID="47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.220419 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd"} err="failed to get container status \"47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd\": rpc error: code = NotFound desc = could not find container \"47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd\": container with ID starting with 47f32bb8a43452c891b0daefdae9d0db3b031c3b9d43d15c9b713b5f66834abd not found: ID does not exist" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.220442 4988 scope.go:117] "RemoveContainer" containerID="a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901" Nov 23 09:39:37 crc kubenswrapper[4988]: E1123 09:39:37.220778 4988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901\": container with ID starting with a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901 not found: ID does not exist" 
containerID="a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.220802 4988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901"} err="failed to get container status \"a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901\": rpc error: code = NotFound desc = could not find container \"a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901\": container with ID starting with a541e218da11a0f6a86c4af2a98d8810c1522b26830d3d1788bb2deb55008901 not found: ID does not exist" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.327259 4988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0369f4b3-b212-4bbf-b150-c4264029e027-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0369f4b3-b212-4bbf-b150-c4264029e027" (UID: "0369f4b3-b212-4bbf-b150-c4264029e027"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.406792 4988 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0369f4b3-b212-4bbf-b150-c4264029e027-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:37 crc kubenswrapper[4988]: I1123 09:39:37.496206 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:39:37 crc kubenswrapper[4988]: E1123 09:39:37.498286 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:39:38 crc kubenswrapper[4988]: I1123 09:39:38.516966 4988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0369f4b3-b212-4bbf-b150-c4264029e027" path="/var/lib/kubelet/pods/0369f4b3-b212-4bbf-b150-c4264029e027/volumes" Nov 23 09:39:50 crc kubenswrapper[4988]: I1123 09:39:50.497284 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:39:50 crc kubenswrapper[4988]: E1123 09:39:50.498069 4988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jnwbw_openshift-machine-config-operator(a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" podUID="a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1" Nov 23 09:40:03 crc kubenswrapper[4988]: I1123 09:40:03.496649 4988 scope.go:117] "RemoveContainer" containerID="e05390516ac3dcb87e54d7bf52d023531993db15b1a2d4e12833500309cca23c" Nov 23 09:40:04 crc kubenswrapper[4988]: I1123 09:40:04.371860 4988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jnwbw" event={"ID":"a8eace8c-a9a8-4d0b-ab50-c9cfde78b7c1","Type":"ContainerStarted","Data":"fb4581ddfbb7bd4c5fe9cb93cbd8a57cb71efbfac5e77ae369a353a94033704d"}